2021
@workshop{Macher2021b,
title = {Dependable Integration Concepts for Human-Centric AI-based Systems},
author = {Georg Macher and Eric Armengaud and Davide Bacciu and Jürgen Dobaj and Maid Dzambic and Matthias Seidl and Omar Veledar},
year = {2021},
date = {2021-09-07},
booktitle = {Proceedings of the 16th International Workshop on Dependable Smart Embedded Cyber-Physical Systems and Systems-of-Systems (DECSoS 2021)},
abstract = {The rising demand to integrate adaptive, cloud-based and/or AI-based systems is also increasing the need for associated dependability concepts. However, the practical processes and methods covering the whole life cycle still need to be instantiated. The assurance of dependability continues to be an open issue with no common solution, especially for novel AI-based and/or dynamic runtime-based approaches. This work focuses on engineering methods and design patterns that support the development of dependable AI-based autonomous systems. The paper presents the related body of knowledge from the TEACHING project, as well as from automotive-domain regulation activities and industrial working groups. It also considers the dependable architectural concepts and their applicability to different scenarios to ensure the dependability of AI-based Cyber-Physical Systems of Systems (CPSoS) in the automotive domain. Finally, the paper shines a light on potential paths for the dependable integration of AI-based systems into the automotive domain through the identified analysis methods and targets.},
keywords = {},
pubstate = {published},
tppubtype = {workshop}
}
@conference{Bacciu2021d,
title = {TEACHING - Trustworthy autonomous cyber-physical applications through human-centred intelligence},
editor = {Davide Bacciu and Siranush Akarmazyan and Eric Armengaud and Manlio Bacco and George Bravos and Calogero Calandra and Emanuele Carlini and Antonio Carta and Pietro Cassara and Massimo Coppola and Charalampos Davalas and Patrizio Dazzi and Maria Carmela Degennaro and Daniele Di Sarli and Jürgen Dobaj and Claudio Gallicchio and Sylvain Girbal and Alberto Gotta and Riccardo Groppo and Vincenzo Lomonaco and Georg Macher and Daniele Mazzei and Gabriele Mencagli and Dimitrios Michail and Alessio Micheli and Roberta Peroglio and Salvatore Petroni and Rosaria Potenza and Farank Pourdanesh and Christos Sardianos and Konstantinos Tserpes and Fulvio Tagliabò and Jakob Valtl and Iraklis Varlamis and Omar Veledar},
doi = {10.1109/COINS51742.2021.9524099},
year = {2021},
date = {2021-08-23},
urldate = {2021-08-23},
booktitle = {Proceedings of the 2021 IEEE International Conference on Omni-Layer Intelligent Systems (COINS)},
abstract = {This paper discusses the perspective of the H2020 TEACHING project on the next generation of autonomous applications running in a distributed and highly heterogeneous environment comprising both virtual and physical resources spanning the edge-cloud continuum. TEACHING puts forward a human-centred vision leveraging the physiological, emotional, and cognitive state of the users as a driver for the adaptation and optimization of the autonomous applications. It does so by building a distributed, embedded and federated learning system complemented by methods and tools to enforce its dependability, security and privacy preservation. The paper discusses the main concepts of the TEACHING approach and singles out the main AI-related research challenges associated with it. Further, we provide a discussion of the design choices for the TEACHING system to tackle the aforementioned challenges},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@workshop{Rosasco2021,
title = {Distilled Replay: Overcoming Forgetting through Synthetic Samples},
author = {Andrea Rosasco and Antonio Carta and Andrea Cossu and Vincenzo Lomonaco and Davide Bacciu},
url = {https://arxiv.org/abs/2103.15851, Arxiv},
year = {2021},
date = {2021-08-19},
urldate = {2021-08-19},
booktitle = {IJCAI 2021 workshop on continual semi-supervised learning (CSSL 2021)},
abstract = {Replay strategies are Continual Learning techniques which mitigate catastrophic forgetting by keeping a buffer of patterns from previous experience, which are interleaved with new data during training. The number of patterns stored in the buffer is a critical parameter which largely influences the final performance and the memory footprint of the approach. This work introduces Distilled Replay, a novel replay strategy for Continual Learning which is able to mitigate forgetting by keeping a very small buffer (up to one pattern per class) of highly informative samples. Distilled Replay builds the buffer through a distillation process which compresses a large dataset into a tiny set of informative examples. We show the effectiveness of our Distilled Replay against naive replay, which randomly samples patterns from the dataset, on four popular Continual Learning benchmarks.},
keywords = {},
pubstate = {published},
tppubtype = {workshop}
}
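The replay mechanics sketched in this abstract are compact enough to illustrate. Below is a minimal, hypothetical PyTorch-style training step in which a buffer of stored (pattern, label) pairs is interleaved with each incoming batch; naive replay would fill the buffer by random sampling, whereas Distilled Replay would fill it with distilled synthetic samples. All names are illustrative, not the authors' code.

import random
import torch

def replay_step(model, optimizer, loss_fn, batch, buffer):
    # buffer: list of (x_i, y_i) tensor pairs kept from previous experiences.
    x, y = batch
    if buffer:
        k = min(len(buffer), len(x))
        bx, by = zip(*random.sample(buffer, k))
        # Interleave stored patterns with the new data before the update.
        x = torch.cat([x, torch.stack(bx)])
        y = torch.cat([y, torch.stack(by)])
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()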
@conference{Atzeni2021,
title = {Modeling Edge Features with Deep Bayesian Graph Networks},
author = {Daniele Atzeni and Davide Bacciu and Federico Errica and Alessio Micheli},
doi = {10.1109/IJCNN52387.2021.9533430},
year = {2021},
date = {2021-07-18},
urldate = {2021-07-18},
booktitle = {Proceedings of the International Joint Conference on Neural Networks (IJCNN 2021)},
publisher = {IEEE},
organization = {IEEE},
abstract = {We propose an extension of the Contextual Graph Markov Model, a deep and probabilistic machine learning model for graphs, to model the distribution of edge features. Our approach is architectural, as we introduce an additional Bayesian network mapping edge features into discrete states to be used by the original model. In doing so, we are also able to build richer graph representations even in the absence of edge features, which is confirmed by the performance improvements on standard graph classification benchmarks. Moreover, we successfully test our proposal in a graph regression scenario where edge features are of fundamental importance, and we show that the learned edge representation provides substantial performance improvements against the original model on three link prediction tasks. By keeping the computational complexity linear in the number of edges, the proposed model is amenable to large-scale graph processing.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{Numeroso2021,
title = {MEG: Generating Molecular Counterfactual Explanations for Deep Graph Networks},
author = {Danilo Numeroso and Davide Bacciu},
year = {2021},
date = {2021-07-18},
urldate = {2021-07-18},
booktitle = {Proceedings of the International Joint Conference on Neural Networks (IJCNN 2021)},
organization = {IEEE},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{BacciuIJCNN2021,
title = {Federated Reservoir Computing Neural Networks},
author = {Davide Bacciu and Daniele Di Sarli and Pouria Faraji and Claudio Gallicchio and Alessio Micheli},
doi = {10.1109/IJCNN52387.2021.9534035},
year = {2021},
date = {2021-07-18},
urldate = {2021-07-18},
booktitle = {Proceedings of the International Joint Conference on Neural Networks (IJCNN 2021)},
publisher = {IEEE},
abstract = {A critical aspect in Federated Learning is the aggregation strategy for the combination of multiple models, trained on the edge, into a single model that incorporates all the knowledge in the federation. Common Federated Learning approaches for Recurrent Neural Networks (RNNs) do not provide guarantees on the predictive performance of the aggregated model. In this paper we show how the use of Echo State Networks (ESNs), which are efficient state-of-the-art RNN models for time-series processing, enables a form of federation that is optimal in the sense that it produces models mathematically equivalent to the corresponding centralized model. Furthermore, the proposed method is compliant with privacy constraints. The proposed method, which we denote as Incremental Federated Learning, is experimentally evaluated against an averaging strategy on two datasets for human state and activity recognition.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
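The optimality claim above has a short linear-algebra justification worth making explicit: the only trained component of an ESN is a linear readout fit by ridge regression, and the ridge solution depends on the training data only through the sums X^T X and X^T Y. Summing these statistics across clients therefore reproduces the centralized readout exactly. A NumPy sketch under that assumption (my notation, not the authors' implementation):

import numpy as np

def client_statistics(states, targets):
    # states: (n_samples, n_units) reservoir activations collected on the edge;
    # targets: (n_samples, n_outputs). Only these products leave the device.
    return states.T @ states, states.T @ targets

def aggregate_readout(all_stats, lam=1e-3):
    # Summing A = X^T X and B = X^T Y over clients yields exactly the
    # readout W = (A + lam*I)^{-1} B of the pooled (centralized) dataset.
    A = sum(a for a, _ in all_stats)
    B = sum(b for _, b in all_stats)
    return np.linalg.solve(A + lam * np.eye(A.shape[0]), B)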
@conference{BacciuPoddaIJCNN2021,
title = {GraphGen-Redux: a Fast and Lightweight Recurrent Model for Labeled Graph Generation},
author = {Davide Bacciu and Marco Podda},
doi = {10.1109/IJCNN52387.2021.9533743},
year = {2021},
date = {2021-07-18},
urldate = {2021-07-18},
booktitle = {Proceedings of the International Joint Conference on Neural Networks (IJCNN 2021)},
organization = {IEEE},
abstract = {The problem of labeled graph generation is gaining attention in the Deep Learning community. The task is challenging due to the sparse and discrete nature of graph spaces. Several approaches have been proposed in the literature, most of which require to transform the graphs into sequences that encode their structure and labels and to learn the distribution of such sequences through an auto-regressive generative model. Among this family of approaches, we focus on the Graphgen model. The preprocessing phase of Graphgen transforms graphs into unique edge sequences called Depth-First Search (DFS) codes, such that two isomorphic graphs are assigned the same DFS code. Each element of a DFS code is associated with a graph edge: specifically, it is a quintuple comprising one node identifier for each of the two endpoints, their node labels, and the edge label. Graphgen learns to generate such sequences auto-regressively and models the probability of each component of the quintuple independently. While effective, the independence assumption made by the model is too loose to capture the complex label dependencies of real-world graphs precisely. By introducing a novel graph preprocessing approach, we are able to process the labeling information of both nodes and edges jointly. The corresponding model, which we term Graphgen-redux, improves upon the generative performances of Graphgen in a wide range of datasets of chemical and social graphs. In addition, it uses approximately 78% fewer parameters than the vanilla variant and requires 50% fewer epochs of training on average.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
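The modelling difference between Graphgen and Graphgen-redux comes down to how each quintuple of a DFS code is factorized. A toy illustration (data and names invented for the example):

# A DFS code is a sequence of quintuples (t_u, t_v, L_u, L_e, L_v):
# two node timestamps, the two node labels, and the edge label.
code = [(0, 1, "C", "single", "C"),
        (1, 2, "C", "double", "O")]

# Graphgen: five independent prediction heads per step, one per component,
# which ignores dependencies among the labels.
graphgen_targets = [tuple(q) for q in code]

# Graphgen-redux: the three labels are fused into a single joint token,
# so the model captures their co-occurrence statistics directly.
redux_targets = [(tu, tv, (lu, le, lv)) for tu, tv, lu, le, lv in code]
print(redux_targets)  # [(0, 1, ('C', 'single', 'C')), (1, 2, ('C', 'double', 'O'))]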
@conference{Errica2021,
title = {Graph Mixture Density Networks},
author = {Federico Errica and Davide Bacciu and Alessio Micheli},
url = {https://proceedings.mlr.press/v139/errica21a.html, PDF},
year = {2021},
date = {2021-07-18},
urldate = {2021-07-18},
booktitle = {Proceedings of the 38th International Conference on Machine Learning (ICML 2021)},
pages = {3025-3035},
publisher = {PMLR},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
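No abstract is attached to this entry, but the core idea, a mixture density output conditioned on a learned graph embedding, can be sketched generically. A hedged PyTorch example of a Gaussian mixture head and its negative log-likelihood (a standard MDN construction, not the released code):

import torch
import torch.nn as nn

class MixtureDensityHead(nn.Module):
    # Maps a graph embedding h to a K-component Gaussian mixture over a scalar target.
    def __init__(self, embed_dim, n_components):
        super().__init__()
        self.pi = nn.Linear(embed_dim, n_components)         # mixing weights
        self.mu = nn.Linear(embed_dim, n_components)         # component means
        self.log_sigma = nn.Linear(embed_dim, n_components)  # component scales

    def neg_log_likelihood(self, h, y):
        log_pi = torch.log_softmax(self.pi(h), dim=-1)
        comp = torch.distributions.Normal(self.mu(h), self.log_sigma(h).exp())
        # log sum_k pi_k * N(y | mu_k, sigma_k), computed stably.
        log_prob = torch.logsumexp(log_pi + comp.log_prob(y.unsqueeze(-1)), dim=-1)
        return -log_prob.mean()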
@workshop{lomonaco2021avalanche,
title = {Avalanche: an End-to-End Library for Continual Learning},
author = {Vincenzo Lomonaco and Lorenzo Pellegrini and Andrea Cossu and Antonio Carta and Gabriele Graffieti and Tyler L Hayes and Matthias De Lange and Marc Masana and Jary Pomponi and Gido van de Ven and Martin Mundt and Qi She and Keiland Cooper and Jeremy Forest and Eden Belouadah and Simone Calderara and German I Parisi and Fabio Cuzzolin and Andreas Tolias and Simone Scardapane and Luca Antiga and Subutai Amhad and Adrian Popescu and Christopher Kanan and Joost van de Weijer and Tinne Tuytelaars and Davide Bacciu and Davide Maltoni},
url = {https://arxiv.org/abs/2104.00405, Arxiv},
year = {2021},
date = {2021-06-19},
urldate = {2021-06-19},
booktitle = {Proceedings of the CVPR 2021 Workshop on Continual Learning},
pages = {3600-3610},
publisher = {IEEE},
keywords = {},
pubstate = {published},
tppubtype = {workshop}
}
@conference{Sattar2021,
title = {Context-aware Graph Convolutional Autoencoder},
author = {Asma Sattar and Davide Bacciu},
doi = {10.1007/978-3-030-85030-2_23},
year = {2021},
date = {2021-06-16},
urldate = {2021-06-16},
booktitle = {Proceedings of the 16th International Work Conference on Artificial Neural Networks (IWANN 2021)},
volume = {12862},
pages = {279-290},
publisher = {Springer},
series = {LNCS},
abstract = {Recommendation problems can be addressed as link prediction tasks in a bipartite graph between user and item nodes, labelled with rating on edges. Existing matrix completion approaches model the user’s opinion on items by ignoring context information that can instead be associated with the edges of the bipartite graph. Context is an important factor to be considered as it heavily affects opinions and preferences. Following this line of research, this paper proposes a graph convolutional auto-encoder approach which considers users’ opinion on items as well as the static node features and context information on edges. Our graph encoder produces a representation of users and items from the perspective of context, static features, and rating opinion. The empirical analysis on three real-world datasets shows that the proposed approach outperforms recent state-of-the-art recommendation systems.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{Bacciu2021,
title = {Benchmarking Reservoir and Recurrent Neural Networks for Human State and Activity Recognition},
author = {Davide Bacciu and Daniele Di Sarli and Claudio Gallicchio and Alessio Micheli and Niccolo Puccinelli},
doi = {10.1007/978-3-030-85099-9_14},
year = {2021},
date = {2021-06-16},
urldate = {2021-06-16},
booktitle = {Proceedings of the 16th International Work Conference on Artificial Neural Networks (IWANN 2021)},
volume = {12862},
pages = {168-179},
publisher = {Springer},
abstract = {Monitoring of human states from streams of sensor data is an appealing applicative area for Recurrent Neural Network (RNN) models. In such a scenario, Echo State Network (ESN) models from the Reservoir Computing paradigm can represent good candidates due to the efficient training algorithms, which, compared to fully trainable RNNs, definitely ease embedding on edge devices.
In this paper, we provide an experimental analysis aimed at assessing the performance of ESNs on tasks of human state and activity recognition, in both shallow and deep setups. Our analysis is conducted in comparison with vanilla RNNs, Long Short-Term Memory, Gated Recurrent Units, and their deep variations. Our empirical results on several datasets clearly indicate that, despite their simplicity, ESNs are able to achieve a level of accuracy that is competitive with those models that require full adaptation of the parameters. From a broader perspective, our analysis also points out that recurrent networks can be a first choice for the class of tasks under consideration, in particular in their deep and gated variants.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@workshop{Carta2021,
title = {Catastrophic Forgetting in Deep Graph Networks: an Introductory Benchmark for Graph Classification},
author = {Antonio Carta and Andrea Cossu and Federico Errica and Davide Bacciu},
year = {2021},
date = {2021-04-12},
urldate = {2021-04-12},
booktitle = {The Web Conference 2021 Workshop on Graph Learning Benchmarks (GLB21)},
abstract = {In this work, we study the phenomenon of catastrophic forgetting in the graph representation learning scenario. The primary objective of the analysis is to understand whether classical continual learning techniques for flat and sequential data have a tangible impact on performances when applied to graph data. To do so, we experiment with a structure-agnostic model and a deep graph network in a robust and controlled environment on three different datasets. The benchmark is complemented by an investigation on the effect of structure-preserving regularization techniques on catastrophic forgetting. We find that replay is the most effective strategy in so far, which also benefits the most from the use of regularization. Our findings suggest interesting future research at the intersection of the continual and graph representation learning fields. Finally, we provide researchers with a flexible software framework to reproduce our results and carry out further experiments.},
keywords = {},
pubstate = {published},
tppubtype = {workshop}
}
2020
@workshop{tomographyNeurips2020,
title = {Generative Tomography Reconstruction},
author = {Matteo Ronchetti and Davide Bacciu},
url = {https://arxiv.org/pdf/2010.14933.pdf, PDF},
year = {2020},
date = {2020-12-11},
urldate = {2020-12-11},
booktitle = {34th Conference on Neural Information Processing Systems (NeurIPS 2020), Workshop on Deep Learning and Inverse Problems},
abstract = {We propose an end-to-end differentiable architecture for tomography reconstruction that directly maps a noisy sinogram into a denoised reconstruction. Compared to existing approaches, our end-to-end architecture produces more accurate reconstructions while using fewer parameters and less time. We also propose a generative model that, given a noisy sinogram, can sample realistic reconstructions. This generative model can be used as a prior inside an iterative process that, by taking into consideration the physical model, can reduce artifacts and errors in the reconstructions.},
keywords = {},
pubstate = {published},
tppubtype = {workshop}
}
@workshop{kplexWS2020,
title = {K-plex Cover Pooling for Graph Neural Networks},
author = {Davide Bacciu and Alessio Conte and Roberto Grossi and Francesco Landolfi and Andrea Marino},
year = {2020},
date = {2020-12-11},
urldate = {2020-12-11},
booktitle = {34th Conference on Neural Information Processing Systems (NeurIPS 2020), Workshop on Learning Meets Combinatorial Algorithms},
abstract = {We introduce a novel pooling technique which borrows from classical results in graph theory, is non-parametric, and generalizes well to graphs of different nature and connectivity patterns. Our pooling method, named KPlexPool, builds on the concepts of graph covers and $k$-plexes, i.e. pseudo-cliques where each node can miss up to $k$ links. The experimental evaluation on molecular and social graph classification shows that KPlexPool achieves state-of-the-art performance, supporting the intuition that well-founded graph-theoretic approaches can be effectively integrated in learning models for graphs.},
keywords = {},
pubstate = {published},
tppubtype = {workshop}
}
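The k-plex notion the abstract builds on is easy to state operationally: a node set S is a k-plex when every member is adjacent to at least |S| - k other members, so a 1-plex is a clique. A small NetworkX check (illustrative only; KPlexPool itself computes k-plex covers, not single memberships):

import networkx as nx

def is_k_plex(graph, nodes, k):
    s = set(nodes)
    # Each node may be non-adjacent to at most k members of S (itself included).
    return all(len(s & set(graph.neighbors(v))) >= len(s) - k for v in s)

g = nx.complete_graph(4)
g.remove_edge(0, 1)               # nodes 0 and 1 each lose one link
print(is_k_plex(g, range(4), 1))  # False: no longer a clique
print(is_k_plex(g, range(4), 2))  # True: still a 2-plex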
@workshop{megWS2020,
title = {Explaining Deep Graph Networks with Molecular Counterfactuals},
author = {Davide Bacciu and Danilo Numeroso},
url = {https://arxiv.org/pdf/2011.05134.pdf, Arxiv},
year = {2020},
date = {2020-12-11},
urldate = {2020-12-11},
booktitle = {34th Conference on Neural Information Processing Systems (NeurIPS 2020), Workshop on Machine Learning for Molecules - Accepted as Contributed Talk (Oral)},
abstract = {We present a novel approach to tackle explainability of deep graph networks in the context of molecule property prediction tasks, named MEG (Molecular Explanation Generator). We generate informative counterfactual explanations for a specific prediction under the form of (valid) compounds with high structural similarity and different predicted properties. We discuss preliminary results showing how the model can convey non-ML experts with key insights into the learning model focus in the neighborhood of a molecule. },
keywords = {},
pubstate = {published},
tppubtype = {workshop}
}
@workshop{CartaNeuripsWS2020,
title = {Short-Term Memory Optimization in Recurrent Neural Networks by Autoencoder-based Initialization},
author = {Antonio Carta and Alessandro Sperduti and Davide Bacciu},
url = {https://arxiv.org/abs/2011.02886, Arxiv},
year = {2020},
date = {2020-12-11},
urldate = {2020-12-11},
booktitle = {34th Conference on Neural Information Processing Systems (NeurIPS 2020), Workshop on Beyond BackPropagation: Novel Ideas for Training Neural Architectures},
abstract = {Training RNNs to learn long-term dependencies is difficult due to vanishing gradients. We explore an alternative solution based on explicit memorization using linear autoencoders for sequences, which makes it possible to maximize short-term memory and which can be solved in closed form without backpropagation. We introduce an initialization schema that pretrains the weights of a recurrent neural network to approximate the linear autoencoder of the input sequences, and we show how such pretraining can better support solving hard classification tasks with long sequences. We test our approach on sequential and permuted MNIST. We show that the proposed approach achieves a much lower reconstruction error for long sequences and a better gradient propagation during the finetuning phase.},
keywords = {},
pubstate = {published},
tppubtype = {workshop}
}
@conference{CastellanaCOLING2020,
title = {Learning from Non-Binary Constituency Trees via Tensor Decomposition},
author = {Daniele Castellana and Davide Bacciu},
year = {2020},
date = {2020-12-08},
urldate = {2020-12-08},
booktitle = {PROCEEDINGS OF THE 2020 INTERNATIONAL CONFERENCE ON COMPUTATIONAL LINGUISTICS (COLING 2020)},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{smc2020,
title = {ROS-Neuro Integration of Deep Convolutional Autoencoders for EEG Signal Compression in Real-time BCIs},
author = {Andrea Valenti and Michele Barsotti and Raffaello Brondi and Davide Bacciu and Luca Ascari},
url = {https://arxiv.org/abs/2008.13485, Arxiv},
year = {2020},
date = {2020-10-11},
urldate = {2020-10-11},
booktitle = {Proceedings of the 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC)},
publisher = {IEEE},
abstract = {Typical EEG-based BCI applications require the computation of complex functions over the noisy EEG channels to be carried out in an efficient way. Deep learning algorithms are capable of learning flexible nonlinear functions directly from data, and their constant processing latency is perfect for their deployment into online BCI systems. However, it is crucial for the jitter of the processing system to be as low as possible, in order to avoid unpredictable behaviour that can ruin the system's overall usability. In this paper, we present a novel encoding method, based on deep convolutional autoencoders, that is able to perform efficient compression of the raw EEG inputs. We deploy our model in a ROS-Neuro node, thus making it suitable for integration in ROS-based BCI and robotic systems in real-world scenarios. The experimental results show that our system is capable of generating meaningful compressed encodings that preserve the original information contained in the raw input. They also show that the ROS-Neuro node is able to produce such encodings at a steady rate, with minimal jitter. We believe that our system can represent an important step towards the development of an effective BCI processing pipeline fully standardized in the ROS-Neuro framework.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
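The compression module described above is, at its core, a 1-D convolutional autoencoder over multichannel EEG windows. The following PyTorch sketch shows the general shape of such a compressor; layer sizes and hyperparameters are invented for illustration and are not the paper's architecture.

import torch
import torch.nn as nn

class EEGAutoencoder(nn.Module):
    def __init__(self, n_channels=8, code_channels=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.Conv1d(32, code_channels, kernel_size=5, stride=2, padding=2),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(code_channels, 32, kernel_size=5, stride=2,
                               padding=2, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose1d(32, n_channels, kernel_size=5, stride=2,
                               padding=2, output_padding=1),
        )

    def forward(self, x):           # x: (batch, channels, time)
        code = self.encoder(x)      # 4x-shorter code sent to downstream nodes
        return self.decoder(code), code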
@conference{Wcci20Tensor,
title = {Generalising Recursive Neural Models by Tensor Decomposition},
author = {Daniele Castellana and Davide Bacciu},
url = {https://arxiv.org/abs/2006.10021, Arxiv},
year = {2020},
date = {2020-07-19},
urldate = {2020-07-19},
booktitle = {Proceedings of the 2020 IEEE World Congress on Computational Intelligence},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{Wcci20CL,
title = {Continual Learning with Gated Incremental Memories for Sequential Data Processing},
author = {Andrea Cossu and Antonio Carta and Davide Bacciu},
url = {https://arxiv.org/pdf/2004.04077.pdf, Arxiv},
doi = {10.1109/IJCNN48605.2020.9207550},
year = {2020},
date = {2020-07-19},
urldate = {2020-07-19},
booktitle = {Proceedings of the 2020 IEEE World Congress on Computational Intelligence},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{ecai2020,
title = {Learning a Latent Space of Style-Aware Music Representations by Adversarial Autoencoders},
author = {Andrea Valenti and Antonio Carta and Davide Bacciu},
url = {https://arxiv.org/abs/2001.05494},
year = {2020},
date = {2020-06-08},
booktitle = {Proceedings of the 24th European Conference on Artificial Intelligence (ECAI 2020)},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{ecml2020LMN,
title = {Incremental training of a recurrent neural network exploiting a multi-scale dynamic memory},
author = {Antonio Carta and Alessandro Sperduti and Davide Bacciu},
year = {2020},
date = {2020-06-05},
urldate = {2020-06-05},
booktitle = {Proceedings of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases 2020 (ECML-PKDD 2020)},
publisher = {Springer International Publishing},
abstract = {The effectiveness of recurrent neural networks can be largely influenced by their ability to store into their dynamical memory information extracted from input sequences at different frequencies and timescales. Such a feature can be introduced into a neural architecture by an appropriate modularization of the dynamic memory. In this paper we propose a novel incrementally trained recurrent architecture targeting explicitly multi-scale learning. First, we show how to extend the architecture of a simple RNN by separating its hidden state into different modules, each subsampling the network hidden activations at different frequencies. Then, we discuss a training algorithm where new modules are iteratively added to the model to learn progressively longer dependencies. Each new module works at a slower frequency than the previous ones and it is initialized to encode the subsampled sequence of hidden activations. Experimental results on synthetic and real-world datasets on speech recognition and handwritten characters show that the modular architecture and the incremental training algorithm improve the ability of recurrent neural networks to capture long-term dependencies.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
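The key architectural device here, memory modules ticking at progressively slower rates, can be conveyed with a small sketch. The clock schedule below (powers of two) and the update rule are hypothetical stand-ins for the paper's subsampling scheme.

import numpy as np

def multiscale_step(modules, states, x_t, t):
    # Module i refreshes only every 2**i steps, so later-added modules
    # integrate information over progressively longer timescales.
    for i, (m, s) in enumerate(zip(modules, states)):
        if t % (2 ** i) == 0:
            states[i] = np.tanh(m["W_in"] @ x_t + m["W_rec"] @ s)
    return states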
@conference{aistats2020,
title = {A Deep Generative Model for Fragment-Based Molecule Generation},
author = {Marco Podda and Davide Bacciu and Alessio Micheli},
url = {https://arxiv.org/abs/2002.12826},
year = {2020},
date = {2020-06-03},
urldate = {2020-06-03},
booktitle = {Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics (AISTATS 2020)},
abstract = {Molecule generation is a challenging open problem in cheminformatics. Currently, deep generative approaches addressing the challenge belong to two broad categories, differing in how molecules are represented. One approach encodes molecular graphs as strings of text and learns their corresponding character-based language model. Another, more expressive, approach operates directly on the molecular graph. In this work, we address two limitations of the former: generation of invalid or duplicate molecules. To improve validity rates, we develop a language model for small molecular substructures called fragments, loosely inspired by the well-known paradigm of Fragment-Based Drug Design. In other words, we generate molecules fragment by fragment, instead of atom by atom. To improve uniqueness rates, we present a frequency-based clustering strategy that helps to generate molecules with infrequent fragments. We show experimentally that our model largely outperforms other language-model-based competitors, reaching state-of-the-art performances typical of graph-based approaches. Moreover, generated molecules display molecular properties similar to those in the training sample, even in the absence of explicit task-specific supervision.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
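The fragment-by-fragment idea can be previewed with RDKit, whose BRICS decomposition is a convenient stand-in for the paper's fragmentation step (the actual pipeline and vocabulary handling differ):

from collections import Counter
from rdkit import Chem
from rdkit.Chem import BRICS

smiles = ["CC(=O)Oc1ccccc1C(=O)O", "CC(C)Cc1ccc(cc1)C(C)C(=O)O"]  # aspirin, ibuprofen
fragments = Counter()
for smi in smiles:
    fragments.update(BRICS.BRICSDecompose(Chem.MolFromSmiles(smi)))

# A frequency-ranked fragment vocabulary; in the paper, infrequent fragments
# are handled by a clustering strategy to improve uniqueness rates.
vocab = [frag for frag, _ in fragments.most_common()]
print(vocab[:5])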
@conference{iclr19,
title = {A Fair Comparison of Graph Neural Networks for Graph Classification},
author = {Federico Errica and Marco Podda and Davide Bacciu and Alessio Micheli},
url = {https://openreview.net/pdf?id=HygDF6NFPB, PDF
https://iclr.cc/virtual_2020/poster_HygDF6NFPB.html, Talk
https://github.com/diningphil/gnn-comparison, Code},
year = {2020},
date = {2020-04-30},
booktitle = {Proceedings of the Eighth International Conference on Learning Representations (ICLR 2020)},
abstract = {Experimental reproducibility and replicability are critical topics in machine learning. Authors have often raised concerns about their lack in scientific publications to improve the quality of the field. Recently, the graph representation learning field has attracted the attention of a wide research community, which resulted in a large stream of works.
As such, several Graph Neural Network models have been developed to effectively tackle graph classification. However, experimental procedures often lack rigorousness and are hardly reproducible. Motivated by this, we provide an overview of common practices that should be avoided to fairly compare with the state of the art. To counter this troubling trend, we ran more than 47000 experiments in a controlled and uniform framework to re-evaluate five popular models across nine common benchmarks. Moreover, by comparing GNNs with structure-agnostic baselines we provide convincing evidence that, on some datasets, structural information has not been exploited yet. We believe that this work can contribute to the development of the graph learning field, by providing a much needed grounding for rigorous evaluations of graph classification models.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{esann20Podda,
title = {Biochemical Pathway Robustness Prediction with Graph Neural Networks},
author = {Marco Podda and Alessio Micheli and Davide Bacciu and Paolo Milazzo},
editor = {Michel Verleysen},
year = {2020},
date = {2020-04-21},
booktitle = {Proceedings of the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN'20)},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{esann20Errica,
title = {Theoretically Expressive and Edge-aware Graph Learning},
author = {Federico Errica and Davide Bacciu and Alessio Micheli},
editor = {Michel Verleysen},
url = {https://arxiv.org/abs/2001.09005},
year = {2020},
date = {2020-04-21},
booktitle = {Proceedings of the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN'20)},
abstract = {We propose a new Graph Neural Network that combines recent advancements in the field. We give theoretical contributions by proving that the model is strictly more general than the Graph Isomorphism Network and the Gated Graph Neural Network, as it can approximate the same functions and deal with arbitrary edge values. Then, we show how a single node information can flow through the graph unchanged. },
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{esann20Crecchi,
title = {Perplexity-free Parametric t-SNE},
author = {Francesco Crecchi and Cyril de Bodt and Davide Bacciu and Michel Verleysen and John Lee},
editor = {Michel Verleysen},
year = {2020},
date = {2020-04-21},
booktitle = {Proceedings of the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN'20)},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{esann20Tutorial,
title = {Tensor Decompositions in Deep Learning},
author = {Davide Bacciu and Danilo Mandic},
editor = {Michel Verleysen},
url = {https://arxiv.org/abs/2002.11835},
year = {2020},
date = {2020-04-21},
booktitle = {Proceedings of the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN'20)},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{esann20Castellana,
title = {Tensor Decompositions in Recursive Neural Networks for Tree-Structured Data},
author = {Daniele Castellana and Davide Bacciu},
editor = {Michel Verleysen},
url = {https://arxiv.org/pdf/2006.10619.pdf, Arxiv},
year = {2020},
date = {2020-04-21},
booktitle = {Proceedings of the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN'20)},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
2019
@conference{ssci19,
title = {Sequential Sentence Embeddings for Semantic Similarity},
author = {Davide Bacciu and Antonio Carta},
doi = {10.1109/SSCI44817.2019.9002824},
year = {2019},
date = {2019-12-06},
urldate = {2019-12-06},
booktitle = {Proceedings of the 2019 IEEE Symposium Series on Computational Intelligence (SSCI'19)},
publisher = {IEEE},
abstract = { Sentence embeddings are distributed representations of sentences intended to be general features to be effectively used as input for deep learning models across different natural language processing tasks.
State-of-the-art sentence embeddings for semantic similarity are computed with a weighted average of pretrained word embeddings, hence completely ignoring the contribution of word ordering within a sentence in defining its semantics. We propose a novel approach to compute sentence embeddings for semantic similarity that exploits a linear autoencoder for sequences. The method can be trained in closed form and it is easy to fit on unlabeled sentences. Our method provides a grounded approach to identify and subtract common discourse from a sentence and its embedding, to remove associated uninformative features. Unlike similar methods in the literature (e.g. the popular Smooth Inverse Frequency approach), our method is able to account for word order. We show that our estimate of the common discourse vector improves the results on two different semantic similarity benchmarks when compared to related approaches from the literature.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
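The "common discourse" subtraction mentioned in this abstract generalizes an operation popularized by Smooth Inverse Frequency embeddings: removing the projection of each embedding onto a shared, uninformative direction. The paper estimates that direction through its linear autoencoder; the NumPy sketch below uses the first principal direction instead, purely for illustration.

import numpy as np

def remove_common_discourse(E):
    # E: (n_sentences, dim) matrix of sentence embeddings.
    _, _, vt = np.linalg.svd(E - E.mean(axis=0), full_matrices=False)
    u = vt[0]                       # dominant shared direction
    return E - np.outer(E @ u, u)   # subtract its component from every row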
@conference{aiia2019,
title = {A non-negative factorization approach to node pooling in graph convolutional neural networks},
author = {Davide Bacciu and Luigi {Di Sotto}},
url = {https://arxiv.org/pdf/1909.03287.pdf},
year = {2019},
date = {2019-11-22},
booktitle = {Proceedings of the 18th International Conference of the Italian Association for Artificial Intelligence (AIIA 2019)},
publisher = {Springer-Verlag},
series = {Lecture Notes in Artificial Intelligence},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{clic2019,
title = {Suitable doesn’t mean attractive. Human-based evaluation of automatically generated headlines},
author = {Michele Cafagna and Lorenzo {De Mattei} and Davide Bacciu and Malvina Nissim},
editor = {Raffaella Bernardi and Roberto Navigli and Giovanni Semeraro},
url = {http://ceur-ws.org/Vol-2481/paper13.pdf},
year = {2019},
date = {2019-11-15},
urldate = {2019-11-15},
booktitle = {Proceedings of the 6th Italian Conference on Computational Linguistics (CLiC-it 2019)},
volume = {2481 },
publisher = {CEUR},
series = {AI*IA series},
abstract = {We train three different models to generate newspaper headlines from a portion of the corresponding article. The articles are obtained from two mainstream Italian newspapers. In order to assess the models’ performance, we set up a human-based evaluation where 30 different native speakers expressed their judgment over a variety of aspects. The outcome shows that (i) pointer networks perform better than standard sequence to sequence models, creating mostly correct and appropriate titles; (ii) the suitability of a headline to its article for pointer networks is on par or better than the gold headline; (iii) gold headlines are still by far more inviting than generated headlines to read the whole article, highlighting the contrast between human creativity and content appropriateness.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{lmnArx18,
title = {Linear Memory Networks},
author = {Davide Bacciu and Antonio Carta and Alessandro Sperduti},
url = {https://arxiv.org/pdf/1811.03356.pdf},
doi = {10.1007/978-3-030-30487-4_40},
year = {2019},
date = {2019-09-17},
urldate = {2019-09-17},
booktitle = {Proceedings of the 28th International Conference on Artificial Neural Networks (ICANN 2019)},
volume = {11727},
pages = {513-525 },
publisher = {Springer-Verlag},
series = {Lecture Notes in Computer Science},
abstract = {Recurrent neural networks can learn complex transduction problems that require maintaining and actively exploiting a memory of their inputs. Such models traditionally consider memory and input-output functionalities indissolubly entangled. We introduce a novel recurrent architecture based on the conceptual separation between the functional input-output transformation and the memory mechanism, showing how they can be implemented through different neural components. By building on such conceptualization, we introduce the Linear Memory Network, a recurrent model comprising a feedforward neural network, realizing the non-linear functional transformation, and a linear autoencoder for sequences, implementing the memory component. The resulting architecture can be efficiently trained by building on closed-form solutions to linear optimization problems. Further, by exploiting equivalence results between feedforward and recurrent neural networks we devise a pretraining schema for the proposed architecture. Experiments on polyphonic music datasets show competitive results against gated recurrent networks and other state of the art models. },
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
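The separation of memory and input-output functionality that the abstract describes amounts to two coupled updates: a nonlinear feedforward transformation and a purely linear memory recurrence (the part that matches a linear autoencoder for sequences). A one-step sketch in my notation, not the authors' code:

import numpy as np

def lmn_step(x_t, m_prev, W_xh, W_mh, W_hm, W_mm):
    h_t = np.tanh(W_xh @ x_t + W_mh @ m_prev)  # functional component
    m_t = W_hm @ h_t + W_mm @ m_prev           # linear memory component
    return h_t, m_t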
@conference{ijcnn2019,
title = {Bayesian Tensor Factorisation for Bottom-up Hidden Tree Markov Models},
author = {Daniele Castellana and Davide Bacciu},
url = {https://arxiv.org/pdf/1905.13528.pdf},
year = {2019},
date = {2019-07-15},
urldate = {2019-07-15},
booktitle = {Proceedings of the 2019 International Joint Conference on Neural Networks (IJCNN 2019)},
publisher = {IEEE},
abstract = {The Bottom-Up Hidden Tree Markov Model is a highly expressive model for tree-structured data. Unfortunately, it cannot be used in practice due to the intractable size of its state-transition matrix. We propose a new approximation which relies on the Tucker factorisation of tensors. The probabilistic interpretation of such an approximation allows us to define a new probabilistic model for tree-structured data. Hence, we define the new approximated model and we derive its learning algorithm. Then, we empirically assess the effective power of the new model by evaluating it on two different tasks. In both cases, our model outperforms the other approximated model known in the literature.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
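The intractability the abstract starts from, and the saving bought by the Tucker approximation, is a parameter count one can verify directly. The numbers below are illustrative; the paper works with a probabilistic (normalized) version of the factorisation.

import numpy as np

C, L = 10, 5                 # hidden states per node, max number of children
full_params = C ** (L + 1)   # joint children-to-parent transition tensor: 1,000,000

R = 4                        # an illustrative Tucker rank
core = np.random.rand(*([R] * (L + 1)))
factors = [np.random.rand(C, R) for _ in range(L + 1)]
approx_params = core.size + sum(f.size for f in factors)
print(full_params, approx_params)   # 1000000 vs 4336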
@conference{esann19Attacks,
title = {Detecting Black-box Adversarial Examples through Nonlinear Dimensionality Reduction},
author = {Francesco Crecchi and Davide Bacciu and Battista Biggio},
editor = {Michel Verleysen},
url = {https://arxiv.org/pdf/1904.13094.pdf},
year = {2019},
date = {2019-04-24},
booktitle = {Proceedings of the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN'19)},
publisher = {i6doc.com},
address = {Louvain-la-Neuve, Belgium},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{esann19GraphGen,
title = {Graph generation by sequential edge prediction},
author = {Davide Bacciu and Alessio Micheli and Marco Podda},
editor = {Michel Verleysen},
url = {https://www.elen.ucl.ac.be/Proceedings/esann/esannpdf/es2019-107.pdf},
year = {2019},
date = {2019-04-24},
booktitle = {Proceedings of the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN'19)},
publisher = {i6doc.com},
address = {Louvain-la-Neuve, Belgium},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{esann19Tutorial,
title = {Societal Issues in Machine Learning: When Learning from Data is Not Enough},
author = {Davide Bacciu and Battista Biggio and Francesco Crecchi and Paulo J. G. Lisboa and José D. Martin and Luca Oneto and Alfredo Vellido},
editor = {Michel Verleysen},
url = {https://www.elen.ucl.ac.be/Proceedings/esann/esannpdf/es2019-6.pdf},
year = {2019},
date = {2019-04-24},
booktitle = {Proceedings of the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN'19)},
publisher = {i6doc.com},
address = {Louvain-la-Neuve, Belgium},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{inns2019,
title = {Deep Tree Transductions - A Short Survey},
author = {Davide Bacciu and Antonio Bruno},
editor = {Luca Oneto and Nicol{\`o} Navarin and Alessandro Sperduti and Davide Anguita},
url = {https://arxiv.org/abs/1902.01737},
doi = {10.1007/978-3-030-16841-4_25},
year = {2019},
date = {2019-01-04},
urldate = {2019-01-04},
booktitle = {Proceedings of the 2019 INNS Big Data and Deep Learning (INNSBDDL 2019) },
pages = {236--245},
publisher = {Springer International Publishing},
series = {Recent Advances in Big Data and Deep Learning},
abstract = {The paper surveys recent extensions of Long Short-Term Memory networks that handle tree structures, from the perspective of learning non-trivial forms of isomorph structured transductions. It provides a discussion of modern TreeLSTM models, showing the effect of the bias induced by the direction of tree processing. An empirical analysis is performed on real-world benchmarks, highlighting that no single model is adequate to effectively approach all transduction problems.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
2018
@conference{ssci2018,
title = {Text Summarization as Tree Transduction by Top-Down TreeLSTM},
author = {Davide Bacciu and Antonio Bruno},
url = {https://arxiv.org/abs/1809.09096},
doi = {10.1109/SSCI.2018.8628873},
year = {2018},
date = {2018-11-18},
urldate = {2018-11-18},
booktitle = {Proceedings of the 2018 IEEE Symposium Series on Computational Intelligence (SSCI'18)},
pages = {1411-1418},
publisher = {IEEE},
abstract = {Extractive compression is a challenging natural language processing problem. This work contributes by formulating neural extractive compression as a parse tree transduction problem, rather than a sequence transduction task. Motivated by this, we introduce a deep neural model for learning structure-to-substructure tree transductions by extending the standard Long Short-Term Memory, considering the parent-child relationships in the structural recursion. The proposed model achieves state-of-the-art performance on sentence compression benchmarks, both in terms of accuracy and compression rate.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
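The parent-to-child recursion the abstract refers to can be pictured by reusing a standard LSTM cell in which the "previous" state is the parent's state rather than the previous time step. Below is a minimal PyTorch sketch of ours; the class name and dimensions are hypothetical, and the paper's actual cell additionally handles positional children and structured outputs:

    import torch
    import torch.nn as nn

    # Sketch of a top-down tree recursion: each node is conditioned on its
    # parent's state, so information flows from the root towards the leaves.
    class TopDownTreeCell(nn.Module):
        def __init__(self, in_dim, hid_dim):
            super().__init__()
            self.cell = nn.LSTMCell(in_dim, hid_dim)

        def forward(self, x, parent_state=None):
            # parent_state plays the role the previous time step plays in a
            # sequential LSTM (None at the root defaults to a zero state).
            return self.cell(x, parent_state)

    cell = TopDownTreeCell(8, 16)
    h, c = cell(torch.randn(1, 8))                      # root node, no parent
    child_h, child_c = cell(torch.randn(1, 8), (h, c))  # child conditioned on parent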
@workshop{learnaut18,
title = {Learning Tree Distributions by Hidden Markov Models},
author = {Davide Bacciu and Daniele Castellana},
editor = {Rémi Eyraud and Jeffrey Heinz and Guillaume Rabusseau and Matteo Sammartino },
url = {https://arxiv.org/abs/1805.12372},
year = {2018},
date = {2018-07-13},
booktitle = {Proceedings of the FLOC 2018 Workshop on Learning and Automata (LearnAut'18)},
keywords = {},
pubstate = {published},
tppubtype = {workshop}
}
@conference{icml2018,
title = {Contextual Graph Markov Model: A Deep and Generative Approach to Graph Processing},
author = {Davide Bacciu and Federico Errica and Alessio Micheli},
url = {https://arxiv.org/abs/1805.10636},
year = {2018},
date = {2018-07-11},
urldate = {2018-07-11},
booktitle = {Proceedings of the 35th International Conference on Machine Learning (ICML 2018)},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{ijcnn2018,
title = {Concentric ESN: Assessing the Effect of Modularity in Cycle Reservoirs},
author = {Davide Bacciu and Andrea Bongiorno},
url = {https://arxiv.org/abs/1805.09244},
year = {2018},
date = {2018-07-09},
urldate = {2018-07-09},
booktitle = {Proceedings of the 2018 International Joint Conference on Neural Networks (IJCNN 2018) },
pages = {1-9},
publisher = {IEEE},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{esann2018Tree,
title = {Mixture of Hidden Markov Models as Tree Encoder},
author = {Davide Bacciu and Daniele Castellana},
editor = {Michel Verleysen},
isbn = {978-287587047-6},
year = {2018},
date = {2018-04-26},
urldate = {2018-04-26},
booktitle = {Proceedings of the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN'18)},
pages = {543-548},
publisher = {i6doc.com},
address = {Louvain-la-Neuve, Belgium},
abstract = {The paper introduces a new probabilistic tree encoder based on a mixture of Bottom-up Hidden Tree Markov Models. The ability to recognise similar structures in data is experimentally assessed both in clustering and classification tasks. The results of these preliminary experiments suggest that the model can successfully compress the structural and label patterns of trees into a vectorial representation.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
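A common way to turn a mixture model into the kind of vectorial tree encoder the abstract describes is to use the posterior over mixture components as a fixed-size representation. A tiny sketch of that idea follows (our own illustration; the log-likelihood values are placeholders standing in for per-component Hidden Tree Markov Model likelihoods, which the sketch does not compute):

    import numpy as np

    # Component posteriors as a fixed-size encoding of a structured input.
    log_lik = np.array([-10.2, -9.1, -12.4])       # log p(tree | component k), placeholders
    log_prior = np.log(np.array([0.3, 0.5, 0.2]))  # mixture weights
    log_post = log_lik + log_prior
    encoding = np.exp(log_post - np.logaddexp.reduce(log_post))  # normalised posterior vector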
@conference{esann2018Tut,
title = {Bioinformatics and medicine in the era of deep learning},
author = {Davide Bacciu and Paulo J. G. Lisboa and Jose D. Martin and Ruxandra Stoean and Alfredo Vellido},
editor = {Michel Verleysen},
url = {http://arxiv.org/abs/1802.09791},
isbn = {978-287587047-6},
year = {2018},
date = {2018-04-26},
urldate = {2018-04-26},
booktitle = {Proceedings of the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN'18)},
pages = {345-354},
publisher = {i6doc.com},
address = {Louvain-la-Neuve, Belgium},
abstract = {Many of the current scientific advances in the life sciences have their origin in the intensive use of data for knowledge discovery. In no area is this so clear as in bioinformatics, led by technological breakthroughs in data acquisition technologies. It has been argued that bioinformatics could quickly become the field of research generating the largest data repositories, beating other data-intensive areas such as high-energy physics or astroinformatics. Over the last decade, deep learning has become a disruptive advance in machine learning, giving new life to the long-standing connectionist paradigm in artificial intelligence. Deep learning methods are ideally suited to large-scale data and, therefore, they should be ideally suited to knowledge discovery in bioinformatics and biomedicine at large. In this brief paper, we review key aspects of the application of deep learning in bioinformatics and medicine, drawing from the themes covered by the contributions to an ESANN 2018 special session devoted to this topic.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
2017
@conference{dl2017,
title = {Hidden Tree Markov Networks: Deep and Wide Learning for Structured Data},
author = {Davide Bacciu},
url = {https://arxiv.org/abs/1711.07784},
year = {2017},
date = {2017-11-27},
urldate = {2017-11-27},
booktitle = {Proc. of the 2017 IEEE Symposium Series on Computational Intelligence (SSCI'17)},
publisher = {IEEE},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{iml2017,
title = {On the Need of Machine Learning as a Service for the Internet of Things},
author = {Davide Bacciu and Stefano Chessa and Claudio Gallicchio and Alessio Micheli},
isbn = {978-1-4503-5243-7},
year = {2017},
date = {2017-10-18},
booktitle = {Proceedings of the International Conference on Internet of Things and Machine Learning (IML 2017)},
publisher = {ACM},
series = {International Conference Proceedings Series (ICPS)},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{ijcnn2017,
title = {DropIn: Making Neural Networks Robust to Missing Inputs by Dropout},
author = {Davide Bacciu and Francesco Crecchi and Davide Morelli},
url = {https://arxiv.org/abs/1705.02643},
doi = {10.1109/IJCNN.2017.7966106},
isbn = {978-1-5090-6182-2},
year = {2017},
date = {2017-05-19},
urldate = {2017-05-19},
booktitle = {Proceedings of the 2017 International Joint Conference on Neural Networks (IJCNN 2017) },
pages = {2080-2087},
publisher = {IEEE},
abstract = {The paper presents a novel, principled approach to train recurrent neural networks from the Reservoir Computing family that are robust to missing part of the input features at prediction time. By building on the ensembling properties of Dropout regularization, we propose a methodology, named DropIn, which efficiently trains a neural model as a committee machine of subnetworks, each capable of predicting with a subset of the original input features. We discuss the application of the DropIn methodology in the context of Reservoir Computing models, targeting applications characterized by input sources that are unreliable or prone to be disconnected, such as in pervasive wireless sensor networks and ambient intelligence. We provide an experimental assessment using real-world data from such application domains, showing how the DropIn methodology allows the model to maintain predictive performance comparable to that of a model without missing features, even when 20%–50% of the inputs are not available.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
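The core mechanism the abstract describes, applying Dropout to the input features so the model behaves as a committee of subnetworks each trained on a feature subset, can be sketched in a few lines of NumPy (our own illustration; the keep rate and rescaling choice below are ours, not the paper's):

    import numpy as np

    # DropIn-style input masking: each feature is dropped with probability
    # 1 - p_keep during training, so the network learns to predict from
    # feature subsets and better tolerates missing inputs at test time.
    rng = np.random.default_rng(0)

    def dropin(x, p_keep=0.8, training=True):
        if not training:
            return x                            # all features at prediction time
        mask = rng.random(x.shape) < p_keep
        return np.where(mask, x / p_keep, 0.0)  # inverted-dropout rescaling

    x = rng.normal(size=(4, 10))                # batch of 4 samples, 10 features
    x_masked = dropin(x)                        # some features zeroed per sample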
@conference{esann2017,
title = {ELM Preference Learning for Physiological Data},
author = {Davide Bacciu and Michele Colombo and Davide Morelli and David Plans},
editor = {Michel Verleysen},
isbn = {978-2-875870384},
year = {2017},
date = {2017-04-28},
urldate = {2017-04-28},
booktitle = {Proceedings of the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN'17)},
pages = {99-104},
publisher = {i6doc.com},
address = {Louvain-la-Neuve, Belgium},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
2016
@conference{ie2016,
title = {Detecting socialization events in ageing people: the experience of the DOREMI project},
author = {Bacciu Davide and Chessa Stefano and Ferro Erina and Fortunati Luigi and Gallicchio Claudio and La Rosa Davide and Llorente Miguel and Micheli Alessio and Palumbo Filippo and Parodi Oberdan and Valenti Andrea and Vozzi Federico},
doi = {10.1109/IE.2016.28},
issn = {2472-7571},
year = {2016},
date = {2016-10-27},
urldate = {2016-10-27},
booktitle = {Proceedings of the IEEE 12th International Conference on Intelligent Environments (IE 2016)},
pages = {132-135},
publisher = {IEEE},
address = {London, UK},
abstract = {The detection of socialization events is useful to build indicators of social isolation, an important measure in e-health applications. On the other hand, such detection is rather difficult to achieve with non-invasive solutions. This paper reports on the ongoing work on the technological solution for detecting socialization events adopted in the DOREMI project.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
@conference{fun2016,
title = {An Investigation into Cybernetic Humor, or: Can Machines Laugh?},
author = {Davide Bacciu and Vincenzo Gervasi and Giuseppe Prencipe},
editor = {Erik D. Demaine and Fabrizio Grandoni},
url = {http://drops.dagstuhl.de/opus/volltexte/2016/5882},
doi = {10.4230/LIPIcs.FUN.2016.3},
issn = {1868-8969},
year = {2016},
date = {2016-06-10},
booktitle = {Proceedings of the 8th International Conference on Fun with Algorithms (FUN'16) },
volume = {49},
pages = {1-15},
publisher = {Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
abstract = {The mechanisms of humour have long been the subject of much study and investigation. Much of this work is based on literary theories, put forward by some of the most eminent philosophers and thinkers of all times, or on medical theories, investigating the impact of humour on brain activity or behaviour. Recent functional neuroimaging studies, for instance, have investigated the process of comprehending and appreciating humour by examining functional activity in distinctive regions of brains stimulated by joke corpora. Yet, there is precious little work on the computational side, possibly due to the less hilarious nature of computer scientists as compared to men of letters and sawbones. In this paper, we set out to investigate whether literary theories of humour can stand the test of algorithmic laughter. Or, in other words, we ask ourselves the vexed question: can machines laugh? We attempt to answer that question by testing whether an algorithm - namely, a neural network - can "understand" humour, and in particular whether it is possible to automatically identify abstractions that are predicted to be relevant by established literary theories about the mechanisms of humour. Notice that we do not focus here on distinguishing humorous from serious statements - a feat that is clearly way beyond the capabilities of the average human voter, not to mention the average machine - but rather on identifying the underlying mechanisms and triggers that are postulated to exist by literary theories, verifying whether similar mechanisms can be learned by machines.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}