Here you can find a consolidated (a.k.a. slowly updated) list of my publications. A frequently updated (and possibly noisy) list of works is available on my Google Scholar profile.
The list below highlights a few of my recent publications.

Castellana, Daniele; Bacciu, Davide
Bayesian Tensor Factorisation for Bottom-up Hidden Tree Markov Models Conference
Proceedings of the 2019 International Joint Conference on Neural Networks (IJCNN 2019), IEEE, 2019.
@conference{ijcnn2019,
title = {Bayesian Tensor Factorisation for Bottom-up Hidden Tree Markov Models},
author = {Daniele Castellana and Davide Bacciu},
url = {https://arxiv.org/pdf/1905.13528.pdf},
year = {2019},
date = {2019-07-15},
urldate = {2019-07-15},
booktitle = {Proceedings of the 2019 International Joint Conference on Neural Networks (IJCNN 2019)},
publisher = {IEEE},
abstract = {The Bottom-Up Hidden Tree Markov Model is a highly expressive model for tree-structured data. Unfortunately, it cannot be used in practice due to the intractable size of its state-transition matrix. We propose a new approximation which relies on the Tucker factorisation of tensors. The probabilistic interpretation of this approximation allows us to define a new probabilistic model for tree-structured data. Hence, we define the new approximated model and derive its learning algorithm. We then empirically assess the effectiveness of the new model by evaluating it on two different tasks. In both cases, our model outperforms the other approximated model known in the literature.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
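
To give a concrete feel for the factorisation described in the abstract above, here is a minimal numpy sketch (my own illustrative code, not taken from the paper; the values of C, L, and K are assumed toy numbers). It compares the parameter count of the full bottom-up transition tensor against its Tucker-factorised form, then rebuilds a full conditional-probability tensor from a random core and factor matrices.

import numpy as np

# Illustrative sketch (not the authors' code): why a Tucker factorisation
# makes the bottom-up transition tensor tractable. The exact model needs
# P(parent state | states of L children), i.e. C^(L+1) parameters for
# C hidden states; the Tucker form needs only a small core tensor plus
# one factor matrix per mode.

C, L, K = 10, 5, 3  # hidden states, max out-degree, Tucker rank (assumed)

full_params = C ** (L + 1)                       # full transition tensor
tucker_params = K ** (L + 1) + (L + 1) * C * K   # core + factor matrices
print(full_params, tucker_params)                # 1000000 vs. 909

def mode_n_product(T, U, mode):
    """Multiply tensor T by matrix U along the given mode (axis)."""
    T = np.moveaxis(T, mode, 0)
    out = U @ T.reshape(T.shape[0], -1)
    return np.moveaxis(out.reshape((U.shape[0],) + T.shape[1:]), 0, mode)

rng = np.random.default_rng(0)
core = rng.random((K,) * (L + 1))
factors = [rng.random((C, K)) for _ in range(L + 1)]

# Reconstruct the full C^(L+1) tensor from the factorised form ...
T = core
for mode, U in enumerate(factors):
    T = mode_n_product(T, U, mode)

# ... and normalise over the parent axis so each slice is a valid
# conditional distribution P(parent state | children states).
T /= T.sum(axis=0, keepdims=True)
assert T.shape == (C,) * (L + 1)

With these toy values the exact tensor already needs a million parameters for only ten hidden states, while the factorised form needs fewer than a thousand, which is exactly the gap that makes the exact model intractable and the approximation usable in practice.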