{"id":1344,"date":"2023-12-30T18:14:48","date_gmt":"2023-12-30T17:14:48","guid":{"rendered":"http:\/\/pages.di.unipi.it\/bacciu\/?page_id=1344"},"modified":"2024-01-02T15:38:23","modified_gmt":"2024-01-02T14:38:23","slug":"journals","status":"publish","type":"page","link":"https:\/\/pages.di.unipi.it\/bacciu\/publications\/journals\/","title":{"rendered":"Journals"},"content":{"rendered":"\n<p><code><div class=\"teachpress_pub_list\"><form name=\"tppublistform\" method=\"get\"><a name=\"tppubs\" id=\"tppubs\"><\/a><\/form><div class=\"tablenav\"><div class=\"tablenav-pages\"><span class=\"displaying-num\">59 entries<\/span> <a class=\"page-numbers button disabled\">&laquo;<\/a> <a class=\"page-numbers button disabled\">&lsaquo;<\/a> 1 of 2 <a href=\"https:\/\/pages.di.unipi.it\/bacciu\/publications\/journals\/?limit=2&amp;tgid=&amp;yr=&amp;type=&amp;usr=&amp;auth=&amp;tsr=#tppubs\" title=\"next page\" class=\"page-numbers button\">&rsaquo;<\/a> <a href=\"https:\/\/pages.di.unipi.it\/bacciu\/publications\/journals\/?limit=2&amp;tgid=&amp;yr=&amp;type=&amp;usr=&amp;auth=&amp;tsr=#tppubs\" title=\"last page\" class=\"page-numbers button\">&raquo;<\/a> <\/div><\/div><div class=\"teachpress_publication_list\"><h3 class=\"tp_h3\" id=\"tp_h3_2024\">2024<\/h3><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_number\">1.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Carta, Antonio;  Cossu, Andrea;  Lomonaco, Vincenzo;  Bacciu, Davide;  Weijer, Joost<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('257','tp_links')\" style=\"cursor:pointer;\">Projected Latent Distillation for Data-Agnostic Consolidation in distributed continual learning<\/a> <span class=\"tp_pub_type article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Neurocomputing, <\/span><span class=\"tp_pub_additional_volume\">vol. 
598, <\/span><span class=\"tp_pub_additional_pages\">pp. 127935, <\/span><span class=\"tp_pub_additional_year\">2024<\/span>, <span class=\"tp_pub_additional_issn\">ISSN: 0925-2312<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_257\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('257','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_257\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('257','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_257\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('257','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_257\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{CARTA2024127935,<br \/>\r\ntitle = {Projected Latent Distillation for Data-Agnostic Consolidation in distributed continual learning},<br \/>\r\nauthor = {Antonio Carta and Andrea Cossu and Vincenzo Lomonaco and Davide Bacciu and Joost Weijer},<br \/>\r\nurl = {https:\/\/www.sciencedirect.com\/science\/article\/pii\/S0925231224007069},<br \/>\r\ndoi = {10.1016\/j.neucom.2024.127935},<br \/>\r\nissn = {0925-2312},<br \/>\r\nyear  = {2024},<br \/>\r\ndate = {2024-01-01},<br \/>\r\nurldate = {2024-01-01},<br \/>\r\njournal = {Neurocomputing},<br \/>\r\nvolume = {598},<br \/>\r\npages = {127935},<br \/>\r\nabstract = {In continual learning applications on-the-edge, multiple self-centered devices (SCD) learn different local tasks independently, with each SCD only optimizing its own task. Can we achieve (almost) zero-cost collaboration between different devices? 
We formalize this problem as a Distributed Continual Learning (DCL) scenario, where SCDs greedily adapt to their own local tasks and a separate continual learning (CL) model performs a sparse and asynchronous consolidation step that combines the SCD models sequentially into a single multi-task model without using the original data. Unfortunately, current CL methods are not directly applicable to this scenario. We propose Data-Agnostic Consolidation (DAC), a novel double knowledge distillation method which performs distillation in the latent space via a novel Projected Latent Distillation loss. Experimental results show that DAC enables forward transfer between SCDs and reaches state-of-the-art accuracy on Split CIFAR100, CORe50 and Split TinyImageNet, both in single device and distributed CL scenarios. Somewhat surprisingly, a single out-of-distribution image is sufficient as the only source of data for DAC.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('257','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_257\" style=\"display:none;\"><div class=\"tp_abstract_entry\">In continual learning applications on-the-edge, multiple self-centered devices (SCD) learn different local tasks independently, with each SCD only optimizing its own task. Can we achieve (almost) zero-cost collaboration between different devices? We formalize this problem as a Distributed Continual Learning (DCL) scenario, where SCDs greedily adapt to their own local tasks and a separate continual learning (CL) model performs a sparse and asynchronous consolidation step that combines the SCD models sequentially into a single multi-task model without using the original data. Unfortunately, current CL methods are not directly applicable to this scenario. 
We propose Data-Agnostic Consolidation (DAC), a novel double knowledge distillation method which performs distillation in the latent space via a novel Projected Latent Distillation loss. Experimental results show that DAC enables forward transfer between SCDs and reaches state-of-the-art accuracy on Split CIFAR100, CORe50 and Split TinyImageNet, both in single device and distributed CL scenarios. Somewhat surprisingly, a single out-of-distribution image is sufficient as the only source of data for DAC.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('257','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_257\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-globe\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/www.sciencedirect.com\/science\/article\/pii\/S0925231224007069\" title=\"https:\/\/www.sciencedirect.com\/science\/article\/pii\/S0925231224007069\" target=\"_blank\">https:\/\/www.sciencedirect.com\/science\/article\/pii\/S0925231224007069<\/a><\/li><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1016\/j.neucom.2024.127935\" title=\"Follow DOI:10.1016\/j.neucom.2024.127935\" target=\"_blank\">doi:10.1016\/j.neucom.2024.127935<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('257','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Projected Latent Distillation for Data-Agnostic Consolidation in distributed continual learning\" src=\"https:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/neurocomputing.png\" width=\"80\" alt=\"Projected Latent Distillation for Data-Agnostic Consolidation in distributed continual learning\" \/><\/div><\/div><div class=\"tp_publication tp_publication_article\"><div 
class=\"tp_pub_number\">2.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Cossu, Andrea;  Spinnato, Francesco;  Guidotti, Riccardo;  Bacciu, Davide<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('259','tp_links')\" style=\"cursor:pointer;\">Drifting explanations in continual learning<\/a> <span class=\"tp_pub_type article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Neurocomputing, <\/span><span class=\"tp_pub_additional_volume\">vol. 597, <\/span><span class=\"tp_pub_additional_pages\">pp. 127960, <\/span><span class=\"tp_pub_additional_year\">2024<\/span>, <span class=\"tp_pub_additional_issn\">ISSN: 0925-2312<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_259\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('259','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_259\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('259','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_259\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('259','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_259\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{COSSU2024127960,<br \/>\r\ntitle = {Drifting explanations in continual learning},<br \/>\r\nauthor = {Andrea Cossu and Francesco Spinnato and Riccardo Guidotti and Davide Bacciu},<br \/>\r\nurl = {https:\/\/www.sciencedirect.com\/science\/article\/pii\/S0925231224007318},<br \/>\r\ndoi = {10.1016\/j.neucom.2024.127960},<br \/>\r\nissn = {0925-2312},<br \/>\r\nyear  = {2024},<br \/>\r\ndate = 
{2024-01-01},<br \/>\r\nurldate = {2024-01-01},<br \/>\r\njournal = {Neurocomputing},<br \/>\r\nvolume = {597},<br \/>\r\npages = {127960},<br \/>\r\nabstract = {Continual Learning (CL) trains models on streams of data, with the aim of learning new information without forgetting previous knowledge. However, many of these models lack interpretability, making it difficult to understand or explain how they make decisions. This lack of interpretability becomes even more challenging given the non-stationary nature of the data streams in CL. Furthermore, CL strategies aimed at mitigating forgetting directly impact the learned representations. We study the behavior of different explanation methods in CL and propose CLEX (ContinuaL EXplanations), an evaluation protocol to robustly assess the change of explanations in Class-Incremental scenarios, where forgetting is pronounced. We observed that models with similar predictive accuracy do not generate similar explanations. Replay-based strategies, well-known to be some of the most effective ones in class-incremental scenarios, are able to generate explanations that are aligned with those of a model trained offline. In contrast, naive fine-tuning often results in degenerate explanations that drift from those of an offline model. Finally, we discovered that even replay strategies do not always operate at their best when applied to fully-trained recurrent models. Instead, randomized recurrent models (leveraging an untrained recurrent component) clearly reduce the drift of the explanations. 
This discrepancy between fully-trained and randomized recurrent models, previously known only in the context of their predictive continual performance, is more general, also encompassing continual explanations.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('259','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_259\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Continual Learning (CL) trains models on streams of data, with the aim of learning new information without forgetting previous knowledge. However, many of these models lack interpretability, making it difficult to understand or explain how they make decisions. This lack of interpretability becomes even more challenging given the non-stationary nature of the data streams in CL. Furthermore, CL strategies aimed at mitigating forgetting directly impact the learned representations. We study the behavior of different explanation methods in CL and propose CLEX (ContinuaL EXplanations), an evaluation protocol to robustly assess the change of explanations in Class-Incremental scenarios, where forgetting is pronounced. We observed that models with similar predictive accuracy do not generate similar explanations. Replay-based strategies, well-known to be some of the most effective ones in class-incremental scenarios, are able to generate explanations that are aligned with those of a model trained offline. In contrast, naive fine-tuning often results in degenerate explanations that drift from those of an offline model. Finally, we discovered that even replay strategies do not always operate at their best when applied to fully-trained recurrent models. Instead, randomized recurrent models (leveraging an untrained recurrent component) clearly reduce the drift of the explanations. 
This discrepancy between fully-trained and randomized recurrent models, previously known only in the context of their predictive continual performance, is more general, also encompassing continual explanations.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('259','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_259\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-globe\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/www.sciencedirect.com\/science\/article\/pii\/S0925231224007318\" title=\"https:\/\/www.sciencedirect.com\/science\/article\/pii\/S0925231224007318\" target=\"_blank\">https:\/\/www.sciencedirect.com\/science\/article\/pii\/S0925231224007318<\/a><\/li><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1016\/j.neucom.2024.127960\" title=\"Follow DOI:10.1016\/j.neucom.2024.127960\" target=\"_blank\">doi:10.1016\/j.neucom.2024.127960<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('259','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Drifting explanations in continual learning\" src=\"https:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/neurocomputing.png\" width=\"80\" alt=\"Drifting explanations in continual learning\" \/><\/div><\/div><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_number\">3.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Gravina, Alessio;  Bacciu, Davide<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('266','tp_links')\" style=\"cursor:pointer;\">Deep Learning for Dynamic Graphs: Models and Benchmarks<\/a> <span class=\"tp_pub_type article\">Journal Article<\/span> <\/p><p 
class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">IEEE Transactions on Neural Networks and Learning Systems, <\/span><span class=\"tp_pub_additional_pages\">pp. 1-14, <\/span><span class=\"tp_pub_additional_year\">2024<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_266\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('266','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_266\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('266','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_266\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('266','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_266\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{10490120,<br \/>\r\ntitle = {Deep Learning for Dynamic Graphs: Models and Benchmarks},<br \/>\r\nauthor = {Alessio Gravina and Davide Bacciu},<br \/>\r\ndoi = {10.1109\/TNNLS.2024.3379735},<br \/>\r\nyear  = {2024},<br \/>\r\ndate = {2024-01-01},<br \/>\r\nurldate = {2024-01-01},<br \/>\r\njournal = {IEEE Transactions on Neural Networks and Learning Systems},<br \/>\r\npages = {1-14},<br \/>\r\nabstract = {Recent progress in research on deep graph networks (DGNs) has led to a maturation of the domain of learning on graphs. Despite the growth of this research field, there are still important challenges that remain unsolved. Specifically, there is an urgent need to make DGNs suitable for predictive tasks on real-world systems of interconnected entities, which evolve over time. 
With the aim of fostering research in the domain of dynamic graphs, first, we survey recent advances in learning both temporal and spatial information, providing a comprehensive overview of the current state-of-the-art in the domain of representation learning for dynamic graphs. Second, we conduct a fair performance comparison among the most popular proposed approaches on node- and edge-level tasks, leveraging rigorous model selection and assessment for all the methods, thus establishing a sound baseline for evaluating new architectures and approaches.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('266','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_266\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Recent progress in research on deep graph networks (DGNs) has led to a maturation of the domain of learning on graphs. Despite the growth of this research field, there are still important challenges that remain unsolved. Specifically, there is an urgent need to make DGNs suitable for predictive tasks on real-world systems of interconnected entities, which evolve over time. With the aim of fostering research in the domain of dynamic graphs, first, we survey recent advances in learning both temporal and spatial information, providing a comprehensive overview of the current state-of-the-art in the domain of representation learning for dynamic graphs. 
Second, we conduct a fair performance comparison among the most popular proposed approaches on node- and edge-level tasks, leveraging rigorous model selection and assessment for all the methods, thus establishing a sound baseline for evaluating new architectures and approaches.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('266','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_266\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1109\/TNNLS.2024.3379735\" title=\"Follow DOI:10.1109\/TNNLS.2024.3379735\" target=\"_blank\">doi:10.1109\/TNNLS.2024.3379735<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('266','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Deep Learning for Dynamic Graphs: Models and Benchmarks\" src=\"https:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/tnnls.jpg\" width=\"80\" alt=\"Deep Learning for Dynamic Graphs: Models and Benchmarks\" \/><\/div><\/div><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_number\">4.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Zhang, Kun;  Shpitser, Ilya;  Magliacane, Sara;  Bacciu, Davide;  Wu, Fei;  Zhang, Changshui;  Spirtes, Peter<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('267','tp_links')\" style=\"cursor:pointer;\">IEEE Transactions on Neural Networks and Learning Systems Special Issue on Causal Discovery and Causality-Inspired Machine Learning<\/a> <span class=\"tp_pub_type article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">IEEE Transactions on Neural Networks and Learning Systems, 
<\/span><span class=\"tp_pub_additional_volume\">vol. 35, <\/span><span class=\"tp_pub_additional_number\">no. 4, <\/span><span class=\"tp_pub_additional_pages\">pp. 4899-4901, <\/span><span class=\"tp_pub_additional_year\">2024<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_267\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('267','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_267\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('267','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_267\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('267','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_267\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{10492646,<br \/>\r\ntitle = {IEEE Transactions on Neural Networks and Learning Systems Special Issue on Causal Discovery and Causality-Inspired Machine Learning},<br \/>\r\nauthor = {Kun Zhang and Ilya Shpitser and Sara Magliacane and Davide Bacciu and Fei Wu and Changshui Zhang and Peter Spirtes},<br \/>\r\ndoi = {10.1109\/TNNLS.2024.3365968},<br \/>\r\nyear  = {2024},<br \/>\r\ndate = {2024-01-01},<br \/>\r\nurldate = {2024-01-01},<br \/>\r\njournal = {IEEE Transactions on Neural Networks and Learning Systems},<br \/>\r\nvolume = {35},<br \/>\r\nnumber = {4},<br \/>\r\npages = {4899-4901},<br \/>\r\nabstract = {Causality is a fundamental notion in science and engineering. It has attracted much interest across research communities in statistics, machine learning (ML), healthcare, and artificial intelligence (AI), and is becoming increasingly recognized as a vital research area. 
One of the fundamental problems in causality is how to find the causal structure or the underlying causal model. Accordingly, one focus of this Special Issue is on causal discovery, i.e., how can we discover causal structure over a set of variables from observational data with automated procedures? Besides learning causality, another focus is on using causality to help understand and advance ML, that is, causality-inspired ML.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('267','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_267\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Causality is a fundamental notion in science and engineering. It has attracted much interest across research communities in statistics, machine learning (ML), healthcare, and artificial intelligence (AI), and is becoming increasingly recognized as a vital research area. One of the fundamental problems in causality is how to find the causal structure or the underlying causal model. Accordingly, one focus of this Special Issue is on causal discovery, i.e., how can we discover causal structure over a set of variables from observational data with automated procedures? 
Besides learning causality, another focus is on using causality to help understand and advance ML, that is, causality-inspired ML.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('267','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_267\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1109\/TNNLS.2024.3365968\" title=\"Follow DOI:10.1109\/TNNLS.2024.3365968\" target=\"_blank\">doi:10.1109\/TNNLS.2024.3365968<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('267','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"IEEE Transactions on Neural Networks and Learning Systems Special Issue on Causal Discovery and Causality-Inspired Machine Learning\" src=\"https:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/tnnls.jpg\" width=\"80\" alt=\"IEEE Transactions on Neural Networks and Learning Systems Special Issue on Causal Discovery and Causality-Inspired Machine Learning\" \/><\/div><\/div><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_number\">5.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Cossu, Andrea;  Carta, Antonio;  Passaro, Lucia;  Lomonaco, Vincenzo;  Tuytelaars, Tinne;  Bacciu, Davide<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('255','tp_links')\" style=\"cursor:pointer;\">Continual pre-training mitigates forgetting in language and vision<\/a> <span class=\"tp_pub_type article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Neural Networks, <\/span><span class=\"tp_pub_additional_volume\">vol. 
179, <\/span><span class=\"tp_pub_additional_pages\">pp. 106492, <\/span><span class=\"tp_pub_additional_year\">2024<\/span>, <span class=\"tp_pub_additional_issn\">ISSN: 0893-6080<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_255\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('255','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_255\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('255','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_255\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('255','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_255\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{COSSU2024106492,<br \/>\r\ntitle = {Continual pre-training mitigates forgetting in language and vision},<br \/>\r\nauthor = {Andrea Cossu and Antonio Carta and Lucia Passaro and Vincenzo Lomonaco and Tinne Tuytelaars and Davide Bacciu},<br \/>\r\nurl = {https:\/\/www.sciencedirect.com\/science\/article\/pii\/S0893608024004167},<br \/>\r\ndoi = {10.1016\/j.neunet.2024.106492},<br \/>\r\nissn = {0893-6080},<br \/>\r\nyear  = {2024},<br \/>\r\ndate = {2024-01-01},<br \/>\r\nurldate = {2024-01-01},<br \/>\r\njournal = {Neural Networks},<br \/>\r\nvolume = {179},<br \/>\r\npages = {106492},<br \/>\r\nabstract = {Pre-trained models are commonly used in Continual Learning to initialize the model before training on the stream of non-stationary data. However, pre-training is rarely applied during Continual Learning. 
We investigate the characteristics of the Continual Pre-Training scenario, where a model is continually pre-trained on a stream of incoming data and only later fine-tuned to different downstream tasks. We introduce an evaluation protocol for Continual Pre-Training which monitors forgetting against a Forgetting Control dataset not present in the continual stream. We disentangle the impact on forgetting of 3 main factors: the input modality (NLP, Vision), the architecture type (Transformer, ResNet) and the pre-training protocol (supervised, self-supervised). Moreover, we propose a Sample-Efficient Pre-training method (SEP) that speeds up the pre-training phase. We show that the pre-training protocol is the most important factor accounting for forgetting. Surprisingly, we discovered that self-supervised continual pre-training in both NLP and Vision is sufficient to mitigate forgetting without the use of any Continual Learning strategy. Other factors, like model depth, input modality and architecture type are not as crucial.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('255','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_255\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Pre-trained models are commonly used in Continual Learning to initialize the model before training on the stream of non-stationary data. However, pre-training is rarely applied during Continual Learning. We investigate the characteristics of the Continual Pre-Training scenario, where a model is continually pre-trained on a stream of incoming data and only later fine-tuned to different downstream tasks. We introduce an evaluation protocol for Continual Pre-Training which monitors forgetting against a Forgetting Control dataset not present in the continual stream. 
We disentangle the impact on forgetting of 3 main factors: the input modality (NLP, Vision), the architecture type (Transformer, ResNet) and the pre-training protocol (supervised, self-supervised). Moreover, we propose a Sample-Efficient Pre-training method (SEP) that speeds up the pre-training phase. We show that the pre-training protocol is the most important factor accounting for forgetting. Surprisingly, we discovered that self-supervised continual pre-training in both NLP and Vision is sufficient to mitigate forgetting without the use of any Continual Learning strategy. Other factors, like model depth, input modality and architecture type are not as crucial.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('255','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_255\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-globe\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/www.sciencedirect.com\/science\/article\/pii\/S0893608024004167\" title=\"https:\/\/www.sciencedirect.com\/science\/article\/pii\/S0893608024004167\" target=\"_blank\">https:\/\/www.sciencedirect.com\/science\/article\/pii\/S0893608024004167<\/a><\/li><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1016\/j.neunet.2024.106492\" title=\"Follow DOI:10.1016\/j.neunet.2024.106492\" target=\"_blank\">doi:10.1016\/j.neunet.2024.106492<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('255','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Continual pre-training mitigates forgetting in language and vision\" src=\"https:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2020\/06\/08936080.jpg\" width=\"80\" alt=\"Continual pre-training mitigates forgetting in 
language and vision\" \/><\/div><\/div><h3 class=\"tp_h3\" id=\"tp_h3_2023\">2023<\/h3><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_number\">6.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Lepri, Marco;  Bacciu, Davide;  Santina, Cosimo Della<\/p><p class=\"tp_pub_title\">Neural Autoencoder-Based Structure-Preserving Model Order Reduction and Control Design for High-Dimensional Physical Systems <span class=\"tp_pub_type article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">IEEE Control Systems Letters, <\/span><span class=\"tp_pub_additional_year\">2023<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_248\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('248','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_248\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{lepri2023neural,<br \/>\r\ntitle = {Neural Autoencoder-Based Structure-Preserving Model Order Reduction and Control Design for High-Dimensional Physical Systems},<br \/>\r\nauthor = {Marco Lepri and Davide Bacciu and Cosimo Della Santina},<br \/>\r\nyear  = {2023},<br \/>\r\ndate = {2023-12-21},<br \/>\r\nurldate = {2023-01-01},<br \/>\r\njournal = {IEEE Control Systems Letters},<br \/>\r\npublisher = {IEEE},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('248','tp_bibtex')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Neural Autoencoder-Based Structure-Preserving Model Order Reduction and Control Design for High-Dimensional Physical Systems\" 
src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/ieeecss.png\" width=\"80\" alt=\"Neural Autoencoder-Based Structure-Preserving Model Order Reduction and Control Design for High-Dimensional Physical Systems\" \/><\/div><\/div><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_number\">7.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Errica, Federico;  Bacciu, Davide;  Micheli, Alessio<\/p><p class=\"tp_pub_title\">PyDGN: a Python Library for Flexible and Reproducible Research on Deep Learning for Graphs <span class=\"tp_pub_type article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Journal of Open Source Software, <\/span><span class=\"tp_pub_additional_volume\">vol. 8, <\/span><span class=\"tp_pub_additional_number\">no. 90, <\/span><span class=\"tp_pub_additional_pages\">pp. 5713, <\/span><span class=\"tp_pub_additional_year\">2023<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_249\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('249','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_249\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{errica2023pydgn,<br \/>\r\ntitle = {PyDGN: a Python Library for Flexible and Reproducible Research on Deep Learning for Graphs},<br \/>\r\nauthor = {Federico Errica and Davide Bacciu and Alessio Micheli},<br \/>\r\nyear  = {2023},<br \/>\r\ndate = {2023-10-31},<br \/>\r\nurldate = {2023-01-01},<br \/>\r\njournal = {Journal of Open Source Software},<br \/>\r\nvolume = {8},<br \/>\r\nnumber = {90},<br \/>\r\npages = {5713},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" 
onclick=\"teachpress_pub_showhide('249','tp_bibtex')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"PyDGN: a Python Library for Flexible and Reproducible Research on Deep Learning for Graphs\" src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/joss.png\" width=\"80\" alt=\"PyDGN: a Python Library for Flexible and Reproducible Research on Deep Learning for Graphs\" \/><\/div><\/div><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_number\">8.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Bacciu, Davide;  Errica, Federico;  Gravina, Alessio;  Madeddu, Lorenzo;  Podda, Marco;  Stilo, Giovanni<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('224','tp_links')\" style=\"cursor:pointer;\">Deep Graph Networks for Drug Repurposing with Multi-Protein Targets<\/a> <span class=\"tp_pub_type article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">IEEE Transactions on Emerging Topics in Computing, <\/span><span class=\"tp_pub_additional_year\">2023<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_resource_link\"><a id=\"tp_links_sh_224\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('224','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_224\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('224','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_224\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Bacciu2023b,<br \/>\r\ntitle = {Deep Graph Networks for Drug Repurposing with Multi-Protein Targets},<br \/>\r\nauthor = {Davide Bacciu and Federico Errica and Alessio Gravina and Lorenzo Madeddu and 
Marco Podda and Giovanni Stilo},<br \/>\r\ndoi = {10.1109\/TETC.2023.3238963},<br \/>\r\nyear  = {2023},<br \/>\r\ndate = {2023-02-01},<br \/>\r\nurldate = {2023-02-01},<br \/>\r\njournal = {IEEE Transactions on Emerging Topics in Computing},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('224','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_224\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1109\/TETC.2023.3238963\" title=\"Follow DOI:10.1109\/TETC.2023.3238963\" target=\"_blank\">doi:10.1109\/TETC.2023.3238963<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('224','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Deep Graph Networks for Drug Repurposing with Multi-Protein Targets\" src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/ieeetranscomp.jpg\" width=\"80\" alt=\"Deep Graph Networks for Drug Repurposing with Multi-Protein Targets\" \/><\/div><\/div><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_number\">9.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Lanciano, Giacomo;  Galli, Filippo;  Cucinotta, Tommaso;  Bacciu, Davide;  Passarella, Andrea<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('243','tp_links')\" style=\"cursor:pointer;\">Extending OpenStack Monasca for Predictive Elasticity Control<\/a> <span class=\"tp_pub_type article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Big Data Mining and Analytics, 
<\/span><span class=\"tp_pub_additional_year\">2023<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_resource_link\"><a id=\"tp_links_sh_243\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('243','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_243\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('243','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_243\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Lanciano2023extending,<br \/>\r\ntitle = {Extending OpenStack Monasca for Predictive Elasticity Control},<br \/>\r\nauthor = {Giacomo Lanciano and Filippo Galli and Tommaso Cucinotta and Davide Bacciu and Andrea Passarella},<br \/>\r\ndoi = {10.26599\/BDMA.2023.9020014},<br \/>\r\nyear  = {2023},<br \/>\r\ndate = {2023-01-01},<br \/>\r\nurldate = {2023-01-01},<br \/>\r\njournal = {Big Data Mining and Analytics},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('243','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_243\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.26599\/BDMA.2023.9020014\" title=\"Follow DOI:10.26599\/BDMA.2023.9020014\" target=\"_blank\">doi:10.26599\/BDMA.2023.9020014<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('243','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Extending OpenStack Monasca for Predictive Elasticity Control\" 
src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/bda.jpg\" width=\"80\" alt=\"Extending OpenStack Monasca for Predictive Elasticity Control\" \/><\/div><\/div><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_number\">10.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Caro, Valerio De;  Gallicchio, Claudio;  Bacciu, Davide<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('244','tp_links')\" style=\"cursor:pointer;\">Continual adaptation of federated reservoirs in pervasive environments<\/a> <span class=\"tp_pub_type article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Neurocomputing, <\/span><span class=\"tp_pub_additional_pages\">pp. 126638, <\/span><span class=\"tp_pub_additional_year\">2023<\/span>, <span class=\"tp_pub_additional_issn\">ISSN: 0925-2312<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_244\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('244','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_244\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('244','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_244\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('244','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_244\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{DECARO2023126638,<br \/>\r\ntitle = {Continual adaptation of federated reservoirs in pervasive environments},<br \/>\r\nauthor = {Valerio De Caro and Claudio Gallicchio and Davide Bacciu},<br \/>\r\nurl = 
{https:\/\/www.sciencedirect.com\/science\/article\/pii\/S0925231223007610},<br \/>\r\ndoi = {10.1016\/j.neucom.2023.126638},<br \/>\r\nissn = {0925-2312},<br \/>\r\nyear  = {2023},<br \/>\r\ndate = {2023-01-01},<br \/>\r\nurldate = {2023-01-01},<br \/>\r\njournal = {Neurocomputing},<br \/>\r\npages = {126638},<br \/>\r\nabstract = {When performing learning tasks in pervasive environments, the main challenge arises from the need to combine federated and continual settings. The former comes from the massive distribution of devices with privacy-regulated data. The latter is required by the low resources of the participating devices, which may retain data for short periods of time. In this paper, we propose a setup for learning with Echo State Networks (ESNs) in pervasive environments. Our proposal focuses on the use of Intrinsic Plasticity (IP), a gradient-based method for adapting the reservoir\u2019s non-linearity. First, we extend the objective function of IP to include the uncertainty arising from the distribution of the data over space and time. Then, we propose Federated Intrinsic Plasticity (FedIP), which is intended for client\u2013server federated topologies with stationary data, and adapts the learning scheme provided by Federated Averaging (FedAvg) to include the learning rule of IP. Finally, we further extend this algorithm to Federated Continual Intrinsic Plasticity (FedCLIP) to equip clients with CL strategies for dealing with continuous data streams. We evaluate our approach on an incremental setup built upon real-world datasets from human monitoring, where we tune the complexity of the scenario in terms of the distribution of the data over space and time. 
Results show that both our algorithms improve the representation capabilities and the performance of the ESN, while being robust to catastrophic forgetting.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('244','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_244\" style=\"display:none;\"><div class=\"tp_abstract_entry\">When performing learning tasks in pervasive environments, the main challenge arises from the need to combine federated and continual settings. The former comes from the massive distribution of devices with privacy-regulated data. The latter is required by the low resources of the participating devices, which may retain data for short periods of time. In this paper, we propose a setup for learning with Echo State Networks (ESNs) in pervasive environments. Our proposal focuses on the use of Intrinsic Plasticity (IP), a gradient-based method for adapting the reservoir\u2019s non-linearity. First, we extend the objective function of IP to include the uncertainty arising from the distribution of the data over space and time. Then, we propose Federated Intrinsic Plasticity (FedIP), which is intended for client\u2013server federated topologies with stationary data, and adapts the learning scheme provided by Federated Averaging (FedAvg) to include the learning rule of IP. Finally, we further extend this algorithm to Federated Continual Intrinsic Plasticity (FedCLIP) to equip clients with CL strategies for dealing with continuous data streams. We evaluate our approach on an incremental setup built upon real-world datasets from human monitoring, where we tune the complexity of the scenario in terms of the distribution of the data over space and time. 
Results show that both our algorithms improve the representation capabilities and the performance of the ESN, while being robust to catastrophic forgetting.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('244','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_244\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-globe\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/www.sciencedirect.com\/science\/article\/pii\/S0925231223007610\" title=\"https:\/\/www.sciencedirect.com\/science\/article\/pii\/S0925231223007610\" target=\"_blank\">https:\/\/www.sciencedirect.com\/science\/article\/pii\/S0925231223007610<\/a><\/li><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1016\/j.neucom.2023.126638\" title=\"Follow DOI:10.1016\/j.neucom.2023.126638\" target=\"_blank\">doi:10.1016\/j.neucom.2023.126638<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('244','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Continual adaptation of federated reservoirs in pervasive environments\" src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/neurocomputing.png\" width=\"80\" alt=\"Continual adaptation of federated reservoirs in pervasive environments\" \/><\/div><\/div><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_number\">11.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Lanciano, Giacomo;  Andreoli, Remo;  Cucinotta, Tommaso;  Bacciu, Davide;  Passarella, Andrea<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('245','tp_links')\" style=\"cursor:pointer;\">A 2-phase Strategy For Intelligent Cloud Operations<\/a> <span class=\"tp_pub_type 
article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">IEEE Access, <\/span><span class=\"tp_pub_additional_pages\">pp. 1-1, <\/span><span class=\"tp_pub_additional_year\">2023<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_resource_link\"><a id=\"tp_links_sh_245\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('245','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_245\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('245','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_245\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{10239346,<br \/>\r\ntitle = {A 2-phase Strategy For Intelligent Cloud Operations},<br \/>\r\nauthor = {Giacomo Lanciano and Remo Andreoli and Tommaso Cucinotta and Davide Bacciu and Andrea Passarella},<br \/>\r\ndoi = {10.1109\/ACCESS.2023.3312218},<br \/>\r\nyear  = {2023},<br \/>\r\ndate = {2023-01-01},<br \/>\r\nurldate = {2023-01-01},<br \/>\r\njournal = {IEEE Access},<br \/>\r\npages = {1-1},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('245','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_245\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1109\/ACCESS.2023.3312218\" title=\"Follow DOI:10.1109\/ACCESS.2023.3312218\" target=\"_blank\">doi:10.1109\/ACCESS.2023.3312218<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" 
onclick=\"teachpress_pub_showhide('245','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"A 2-phase Strategy For Intelligent Cloud Operations\" src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/access.jpg\" width=\"80\" alt=\"A 2-phase Strategy For Intelligent Cloud Operations\" \/><\/div><\/div><h3 class=\"tp_h3\" id=\"tp_h3_2022\">2022<\/h3><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_number\">12.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Dukic, Haris;  Mokarizadeh, Shahab;  Deligiorgis, Georgios;  Sepe, Pierpaolo;  Bacciu, Davide;  Trincavelli, Marco<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('209','tp_links')\" style=\"cursor:pointer;\">Inductive-Transductive Learning for Very Sparse Fashion Graphs<\/a> <span class=\"tp_pub_type article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Neurocomputing, <\/span><span class=\"tp_pub_additional_year\">2022<\/span>, <span class=\"tp_pub_additional_issn\">ISSN: 0925-2312<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_209\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('209','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_209\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('209','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_209\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('209','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_209\" style=\"display:none;\"><div 
class=\"tp_bibtex_entry\"><pre>@article{DUKIC2022,<br \/>\r\ntitle = {Inductive-Transductive Learning for Very Sparse Fashion Graphs},<br \/>\r\nauthor = {Haris Dukic and Shahab Mokarizadeh and Georgios Deligiorgis and Pierpaolo Sepe and Davide Bacciu and Marco Trincavelli},<br \/>\r\ndoi = {10.1016\/j.neucom.2022.06.050},<br \/>\r\nissn = {0925-2312},<br \/>\r\nyear  = {2022},<br \/>\r\ndate = {2022-06-27},<br \/>\r\nurldate = {2022-06-27},<br \/>\r\njournal = {Neurocomputing},<br \/>\r\nabstract = {The assortments of global retailers are composed of hundreds of thousands of products linked by several types of relationships such as style compatibility, \u201cbought together\u201d, \u201cwatched together\u201d, etc. Graphs are a natural representation for assortments, where products are nodes and relations are edges. Style compatibility relations are produced manually and do not cover the whole graph uniformly. We propose to use inductive learning to enhance a graph encoding style compatibility of a fashion assortment, leveraging rich node information comprising textual descriptions and visual data. Then, we show how the proposed graph enhancement substantially improves the performance on transductive tasks with a minor impact on graph sparsity. 
Although demonstrated in a challenging and novel industrial application case, the approach we propose is general enough to be applied to any node-level or edge-level prediction task in very sparse, large-scale networks.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('209','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_209\" style=\"display:none;\"><div class=\"tp_abstract_entry\">The assortments of global retailers are composed of hundreds of thousands of products linked by several types of relationships such as style compatibility, \u201cbought together\u201d, \u201cwatched together\u201d, etc. Graphs are a natural representation for assortments, where products are nodes and relations are edges. Style compatibility relations are produced manually and do not cover the whole graph uniformly. We propose to use inductive learning to enhance a graph encoding style compatibility of a fashion assortment, leveraging rich node information comprising textual descriptions and visual data. Then, we show how the proposed graph enhancement substantially improves the performance on transductive tasks with a minor impact on graph sparsity. 
Although demonstrated in a challenging and novel industrial application case, the approach we propose is general enough to be applied to any node-level or edge-level prediction task in very sparse, large-scale networks.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('209','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_209\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1016\/j.neucom.2022.06.050\" title=\"Follow DOI:10.1016\/j.neucom.2022.06.050\" target=\"_blank\">doi:10.1016\/j.neucom.2022.06.050<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('209','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Inductive-Transductive Learning for Very Sparse Fashion Graphs\" src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/neurocomputing.png\" width=\"80\" alt=\"Inductive-Transductive Learning for Very Sparse Fashion Graphs\" \/><\/div><\/div><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_number\">13.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Sattar, Asma;  Bacciu, Davide<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('208','tp_links')\" style=\"cursor:pointer;\">Graph Neural Network for Context-Aware Recommendation<\/a> <span class=\"tp_pub_type article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Neural Processing Letters, <\/span><span class=\"tp_pub_additional_year\">2022<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_resource_link\"><a id=\"tp_links_sh_208\" 
class=\"tp_show\" onclick=\"teachpress_pub_showhide('208','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_208\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('208','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_208\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{nokey,<br \/>\r\ntitle = {Graph Neural Network for Context-Aware Recommendation},<br \/>\r\nauthor = {Asma Sattar and Davide Bacciu},<br \/>\r\ndoi = {10.1007\/s11063-022-10917-3},<br \/>\r\nyear  = {2022},<br \/>\r\ndate = {2022-06-22},<br \/>\r\nurldate = {2022-06-22},<br \/>\r\njournal = {Neural Processing Letters},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('208','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_208\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1007\/s11063-022-10917-3\" title=\"Follow DOI:10.1007\/s11063-022-10917-3\" target=\"_blank\">doi:10.1007\/s11063-022-10917-3<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('208','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Graph Neural Network for Context-Aware Recommendation\" src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/npl.jpg\" width=\"80\" alt=\"Graph Neural Network for Context-Aware Recommendation\" \/><\/div><\/div><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_number\">14.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> 
Ferrari, Elisa;  Gargani, Luna;  Barbieri, Greta;  Ghiadoni, Lorenzo;  Faita, Francesco;  Bacciu, Davide<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('206','tp_links')\" style=\"cursor:pointer;\">A causal learning framework for the analysis and interpretation of COVID-19 clinical data<\/a> <span class=\"tp_pub_type article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Plos One, <\/span><span class=\"tp_pub_additional_volume\">vol. 17, <\/span><span class=\"tp_pub_additional_number\">no. 5, <\/span><span class=\"tp_pub_additional_year\">2022<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_206\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('206','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_206\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('206','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_206\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('206','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_206\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{DBLP:journals\/corr\/abs-2105-06998,<br \/>\r\ntitle = {A causal learning framework for the analysis and interpretation of COVID-19 clinical data},<br \/>\r\nauthor = {Elisa Ferrari and Luna Gargani and Greta Barbieri and Lorenzo Ghiadoni and Francesco Faita and Davide Bacciu},<br \/>\r\nurl = {https:\/\/arxiv.org\/abs\/2105.06998, Arxiv},<br \/>\r\ndoi = {10.1371\/journal.pone.0268327},<br \/>\r\nyear  = {2022},<br \/>\r\ndate = {2022-05-19},<br \/>\r\nurldate = {2022-05-19},<br \/>\r\njournal = 
{Plos One},<br \/>\r\nvolume = {17},<br \/>\r\nnumber = {5},<br \/>\r\nabstract = {We present a workflow for clinical data analysis that relies on Bayesian Structure Learning (BSL), an unsupervised learning approach, robust to noise and biases, that allows prior medical knowledge to be incorporated into the learning process and that provides explainable results in the form of a graph showing the causal connections among the analyzed features. The workflow consists of a multi-step approach that goes from identifying the main causes of patient's outcome through BSL, to the realization of a tool suitable for clinical practice, based on a Binary Decision Tree (BDT), to recognize patients at high risk with information already available at hospital admission time. We evaluate our approach on a feature-rich COVID-19 dataset, showing that the proposed framework provides a schematic overview of the multi-factorial processes that jointly contribute to the outcome. We discuss how these computational findings are confirmed by current understanding of the COVID-19 pathogenesis. Further, our approach yields a highly interpretable tool correctly predicting the outcome of 85% of subjects based exclusively on 3 features: age, a previous history of chronic obstructive pulmonary disease and the PaO2\/FiO2 ratio at the time of arrival to the hospital. The inclusion of additional information from 4 routine blood tests (Creatinine, Glucose, pO2 and Sodium) increases predictive accuracy to 94.5%. 
},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('206','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_206\" style=\"display:none;\"><div class=\"tp_abstract_entry\">We present a workflow for clinical data analysis that relies on Bayesian Structure Learning (BSL), an unsupervised learning approach, robust to noise and biases, that allows prior medical knowledge to be incorporated into the learning process and that provides explainable results in the form of a graph showing the causal connections among the analyzed features. The workflow consists of a multi-step approach that goes from identifying the main causes of patient's outcome through BSL, to the realization of a tool suitable for clinical practice, based on a Binary Decision Tree (BDT), to recognize patients at high risk with information already available at hospital admission time. We evaluate our approach on a feature-rich COVID-19 dataset, showing that the proposed framework provides a schematic overview of the multi-factorial processes that jointly contribute to the outcome. We discuss how these computational findings are confirmed by current understanding of the COVID-19 pathogenesis. Further, our approach yields a highly interpretable tool correctly predicting the outcome of 85% of subjects based exclusively on 3 features: age, a previous history of chronic obstructive pulmonary disease and the PaO2\/FiO2 ratio at the time of arrival to the hospital. The inclusion of additional information from 4 routine blood tests (Creatinine, Glucose, pO2 and Sodium) increases predictive accuracy to 94.5%. 
<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('206','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_206\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-arxiv\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/arxiv.org\/abs\/2105.06998\" title=\"Arxiv\" target=\"_blank\">Arxiv<\/a><\/li><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1371\/journal.pone.0268327\" title=\"Follow DOI:10.1371\/journal.pone.0268327\" target=\"_blank\">doi:10.1371\/journal.pone.0268327<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('206','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"A causal learning framework for the analysis and interpretation of COVID-19 clinical data\" src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/plos.png\" width=\"80\" alt=\"A causal learning framework for the analysis and interpretation of COVID-19 clinical data\" \/><\/div><\/div><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_number\">15.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Bacciu, Davide;  Morelli, Davide;  Pandelea, Vlad<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('212','tp_links')\" style=\"cursor:pointer;\">Modeling Mood Polarity and Declaration Occurrence by Neural Temporal Point Processes<\/a> <span class=\"tp_pub_type article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">IEEE Transactions on Neural Networks and Learning Systems, <\/span><span class=\"tp_pub_additional_pages\">pp. 
1-8, <\/span><span class=\"tp_pub_additional_year\">2022<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_resource_link\"><a id=\"tp_links_sh_212\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('212','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_212\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('212','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_212\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{pandelea2022,<br \/>\r\ntitle = {Modeling Mood Polarity and Declaration Occurrence by Neural Temporal Point Processes},<br \/>\r\nauthor = {Davide Bacciu and Davide Morelli and Vlad Pandelea},<br \/>\r\ndoi = {10.1109\/TNNLS.2022.3172871},<br \/>\r\nyear  = {2022},<br \/>\r\ndate = {2022-05-13},<br \/>\r\nurldate = {2022-05-13},<br \/>\r\njournal = {IEEE Transactions on Neural Networks and Learning Systems},<br \/>\r\npages = {1-8},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('212','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_212\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1109\/TNNLS.2022.3172871\" title=\"Follow DOI:10.1109\/TNNLS.2022.3172871\" target=\"_blank\">doi:10.1109\/TNNLS.2022.3172871<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('212','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Modeling Mood Polarity and Declaration Occurrence by Neural Temporal Point Processes\" 
src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/tnnls.jpg\" width=\"80\" alt=\"Modeling Mood Polarity and Declaration Occurrence by Neural Temporal Point Processes\" \/><\/div><\/div><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_number\">16.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Bacciu, Davide;  Numeroso, Danilo<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('200','tp_links')\" style=\"cursor:pointer;\">Explaining Deep Graph Networks via Input Perturbation<\/a> <span class=\"tp_pub_type article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">IEEE Transactions on Neural Networks and Learning Systems, <\/span><span class=\"tp_pub_additional_year\">2022<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_200\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('200','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_200\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('200','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_200\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('200','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_200\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Bacciu2022,<br \/>\r\ntitle = {Explaining Deep Graph Networks via Input Perturbation},<br \/>\r\nauthor = {Davide Bacciu and Danilo Numeroso<br \/>\r\n},<br \/>\r\ndoi = {10.1109\/TNNLS.2022.3165618},<br \/>\r\nyear  = {2022},<br \/>\r\ndate = {2022-04-21},<br \/>\r\nurldate = {2022-04-21},<br \/>\r\njournal = {IEEE 
Transactions on Neural Networks and Learning Systems},<br \/>\r\nabstract = {Deep Graph Networks are a family of machine learning models for structured data which are finding heavy application in life-sciences (drug repurposing, molecular property predictions) and on social network data (recommendation systems). The privacy and safety-critical nature of such domains motivates the need for developing effective explainability methods for this family of models. So far, progress in this field has been challenged by the combinatorial nature and complexity of graph structures. In this respect, we present a novel local explanation framework specifically tailored to graph data and deep graph networks. Our approach leverages reinforcement learning to generate meaningful local perturbations of the input graph, whose prediction we seek an interpretation for. These perturbed data points are obtained by optimising a multi-objective score taking into account similarities both at a structural level as well as at the level of the deep model outputs. By this means, we are able to populate a set of informative neighbouring samples for the query graph, which is then used to fit an interpretable model for the predictive behaviour of the deep network locally to the query graph prediction. 
We show the effectiveness of the proposed explainer by a qualitative analysis on two chemistry datasets, TOS and ESOL and by quantitative results on a benchmark dataset for explanations, CYCLIQ.<br \/>\r\n},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('200','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_200\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Deep Graph Networks are a family of machine learning models for structured data which are finding heavy application in life-sciences (drug repurposing, molecular property predictions) and on social network data (recommendation systems). The privacy and safety-critical nature of such domains motivates the need for developing effective explainability methods for this family of models. So far, progress in this field has been challenged by the combinatorial nature and complexity of graph structures. In this respect, we present a novel local explanation framework specifically tailored to graph data and deep graph networks. Our approach leverages reinforcement learning to generate meaningful local perturbations of the input graph, whose prediction we seek an interpretation for. These perturbed data points are obtained by optimising a multi-objective score taking into account similarities both at a structural level as well as at the level of the deep model outputs. By this means, we are able to populate a set of informative neighbouring samples for the query graph, which is then used to fit an interpretable model for the predictive behaviour of the deep network locally to the query graph prediction. 
We show the effectiveness of the proposed explainer by a qualitative analysis on two chemistry datasets, TOS and ESOL and by quantitative results on a benchmark dataset for explanations, CYCLIQ.<br \/>\r\n<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('200','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_200\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1109\/TNNLS.2022.3165618\" title=\"Follow DOI:10.1109\/TNNLS.2022.3165618\" target=\"_blank\">doi:10.1109\/TNNLS.2022.3165618<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('200','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Explaining Deep Graph Networks via Input Perturbation\" src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/tnnls.jpg\" width=\"80\" alt=\"Explaining Deep Graph Networks via Input Perturbation\" \/><\/div><\/div><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_number\">17.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Collodi, Lorenzo;  Bacciu, Davide;  Bianchi, Matteo;  Averta, Giuseppe<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('194','tp_links')\" style=\"cursor:pointer;\">Learning with few examples the semantic description of novel human-inspired grasp strategies from RGB data<\/a> <span class=\"tp_pub_type article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\"> IEEE Robotics and Automation Letters, <\/span><span class=\"tp_pub_additional_pages\">pp.  
2573 - 2580, <\/span><span class=\"tp_pub_additional_year\">2022<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_194\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('194','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_194\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('194','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_194\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('194','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_194\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Collodi2022,<br \/>\r\ntitle = {Learning with few examples the semantic description of novel human-inspired grasp strategies from RGB data},<br \/>\r\nauthor = {Lorenzo Collodi and Davide Bacciu and Matteo Bianchi and Giuseppe Averta},<br \/>\r\nurl = {https:\/\/www.researchgate.net\/profile\/Giuseppe-Averta\/publication\/358006552_Learning_With_Few_Examples_the_Semantic_Description_of_Novel_Human-Inspired_Grasp_Strategies_From_RGB_Data\/links\/61eae01e8d338833e3857251\/Learning-With-Few-Examples-the-Semantic-Description-of-Novel-Human-Inspired-Grasp-Strategies-From-RGB-Data.pdf, Open Version},<br \/>\r\ndoi = {10.1109\/LRA.2022.3144520},<br \/>\r\nyear  = {2022},<br \/>\r\ndate = {2022-04-04},<br \/>\r\nurldate = {2022-04-04},<br \/>\r\njournal = {IEEE Robotics and Automation Letters},<br \/>\r\npages = {2573 - 2580},<br \/>\r\npublisher = {IEEE},<br \/>\r\nabstract = {Data-driven approaches and human inspiration are fundamental to endow robotic manipulators with advanced autonomous grasping capabilities. 
However, to capitalize upon these two pillars, several aspects need to be considered, which include the number of human examples used for training; the need for having in advance all the required information for classification (hardly feasible in unstructured environments); the trade-off between the task performance and the processing cost. In this paper, we propose an RGB-based pipeline that can identify the object to be grasped and guide the actual execution of the grasping primitive selected through a combination of Convolutional and Gated Graph Neural Networks. We consider a set of human-inspired grasp strategies, which are afforded by the geometrical properties of the objects and identified from a human grasping taxonomy, and propose to learn new grasping skills with only a few examples. We test our framework with a manipulator endowed with an under-actuated soft robotic hand. Even though we use only 2D information to minimize the footprint of the network, we achieve 90% of successful identifications of the most appropriate human-inspired grasping strategy over ten different classes, of which three were few-shot learned, outperforming an ideal model trained with all the classes, in sample-scarce conditions.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('194','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_194\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Data-driven approaches and human inspiration are fundamental to endow robotic manipulators with advanced autonomous grasping capabilities. 
However, to capitalize upon these two pillars, several aspects need to be considered, which include the number of human examples used for training; the need for having in advance all the required information for classification (hardly feasible in unstructured environments); the trade-off between the task performance and the processing cost. In this paper, we propose an RGB-based pipeline that can identify the object to be grasped and guide the actual execution of the grasping primitive selected through a combination of Convolutional and Gated Graph Neural Networks. We consider a set of human-inspired grasp strategies, which are afforded by the geometrical properties of the objects and identified from a human grasping taxonomy, and propose to learn new grasping skills with only a few examples. We test our framework with a manipulator endowed with an under-actuated soft robotic hand. Even though we use only 2D information to minimize the footprint of the network, we achieve 90% of successful identifications of the most appropriate human-inspired grasping strategy over ten different classes, of which three were few-shot learned, outperforming an ideal model trained with all the classes, in sample-scarce conditions.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('194','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_194\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-file-pdf\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/www.researchgate.net\/profile\/Giuseppe-Averta\/publication\/358006552_Learning_With_Few_Examples_the_Semantic_Description_of_Novel_Human-Inspired_Grasp_Strategies_From_RGB_Data\/links\/61eae01e8d338833e3857251\/Learning-With-Few-Examples-the-Semantic-Description-of-Novel-Human-Inspired-Grasp-Strategies-From-RGB-Data.pdf\" title=\"Open Version\" target=\"_blank\">Open Version<\/a><\/li><li><i class=\"ai ai-doi\"><\/i><a 
class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/https:\/\/doi.org\/10.1109\/LRA.2022.3144520\" title=\"Follow DOI:https:\/\/doi.org\/10.1109\/LRA.2022.3144520\" target=\"_blank\">doi:https:\/\/doi.org\/10.1109\/LRA.2022.3144520<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('194','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Learning with few examples the semantic description of novel human-inspired grasp strategies from RGB data\" src=\"https:\/\/team.inria.fr\/rainbow\/files\/2019\/05\/RAL2018.png\" width=\"80\" alt=\"Learning with few examples the semantic description of novel human-inspired grasp strategies from RGB data\" \/><\/div><\/div><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_number\">18.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Gravina, Alessio;  Wilson, Jennifer L.;  Bacciu, Davide;  Grimes, Kevin J.;  Priami, Corrado<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('201','tp_links')\" style=\"cursor:pointer;\">Controlling astrocyte-mediated synaptic pruning signals for schizophrenia drug repurposing with Deep Graph Networks<\/a> <span class=\"tp_pub_type article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Plos Computational Biology, <\/span><span class=\"tp_pub_additional_volume\">vol. 18, <\/span><span class=\"tp_pub_additional_number\">no. 
5, <\/span><span class=\"tp_pub_additional_year\">2022<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_201\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('201','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_201\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('201','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_201\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('201','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_201\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Gravina2022,<br \/>\r\ntitle = {Controlling astrocyte-mediated synaptic pruning signals for schizophrenia drug repurposing with Deep Graph Networks},<br \/>\r\nauthor = {Alessio Gravina and Jennifer L. Wilson and Davide Bacciu and Kevin J. Grimes and Corrado Priami},<br \/>\r\nurl = {https:\/\/www.biorxiv.org\/content\/10.1101\/2021.10.07.463459v1, BioArxiv},<br \/>\r\ndoi = {10.1371\/journal.pcbi.1009531},<br \/>\r\nyear  = {2022},<br \/>\r\ndate = {2022-04-01},<br \/>\r\nurldate = {2022-04-01},<br \/>\r\njournal = {Plos Computational Biology},<br \/>\r\nvolume = {18},<br \/>\r\nnumber = {5},<br \/>\r\nabstract = {Schizophrenia is a debilitating psychiatric disorder, leading to both physical and social morbidity. Worldwide 1% of the population is struggling with the disease, with 100,000 new cases annually in the United States alone. Despite its importance, the goal of finding effective treatments for schizophrenia remains a challenging task, and previous work conducted expensive large-scale phenotypic screens. 
This work investigates the benefits of Machine Learning for graphs to optimize drug phenotypic screens and predict compounds that mitigate abnormal brain reduction induced by excessive glial phagocytic activity in schizophrenia subjects. Given a compound and its concentration as input, we propose a method that predicts a score associated with three possible compound effects, i.e., reduce, increase, or not influence phagocytosis. We leverage a high-throughput screening to prove experimentally that our method achieves good generalization capabilities. The screening involves 2218 compounds at five different concentrations. Then, we analyze the usability of our approach in a practical setting, i.e., prioritizing the selection of compounds in the SWEETLEAD library. We provide a list of 64 compounds from the library that have the most potential clinical utility for glial phagocytosis mitigation. Lastly, we propose a novel approach to computationally validate their utility as possible therapies for schizophrenia.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('201','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_201\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Schizophrenia is a debilitating psychiatric disorder, leading to both physical and social morbidity. Worldwide 1% of the population is struggling with the disease, with 100,000 new cases annually in the United States alone. Despite its importance, the goal of finding effective treatments for schizophrenia remains a challenging task, and previous work conducted expensive large-scale phenotypic screens. 
This work investigates the benefits of Machine Learning for graphs to optimize drug phenotypic screens and predict compounds that mitigate abnormal brain reduction induced by excessive glial phagocytic activity in schizophrenia subjects. Given a compound and its concentration as input, we propose a method that predicts a score associated with three possible compound effects, i.e., reduce, increase, or not influence phagocytosis. We leverage a high-throughput screening to prove experimentally that our method achieves good generalization capabilities. The screening involves 2218 compounds at five different concentrations. Then, we analyze the usability of our approach in a practical setting, i.e., prioritizing the selection of compounds in the SWEETLEAD library. We provide a list of 64 compounds from the library that have the most potential clinical utility for glial phagocytosis mitigation. Lastly, we propose a novel approach to computationally validate their utility as possible therapies for schizophrenia.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('201','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_201\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-globe\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/www.biorxiv.org\/content\/10.1101\/2021.10.07.463459v1\" title=\"BioArxiv\" target=\"_blank\">BioArxiv<\/a><\/li><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1371\/journal.pcbi.1009531\" title=\"Follow DOI:10.1371\/journal.pcbi.1009531\" target=\"_blank\">doi:10.1371\/journal.pcbi.1009531<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('201','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Controlling astrocyte-mediated synaptic pruning signals for 
schizophrenia drug repurposing with Deep Graph Networks\" src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/plos.png\" width=\"80\" alt=\"Controlling astrocyte-mediated synaptic pruning signals for schizophrenia drug repurposing with Deep Graph Networks\" \/><\/div><\/div><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_number\">19.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Castellana, Daniele;  Bacciu, Davide<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('171','tp_links')\" style=\"cursor:pointer;\">A Tensor Framework for Learning in Structured Domains<\/a> <span class=\"tp_pub_type article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Neurocomputing, <\/span><span class=\"tp_pub_additional_volume\">vol. 470, <\/span><span class=\"tp_pub_additional_pages\">pp. 405-426, <\/span><span class=\"tp_pub_additional_year\">2022<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_171\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('171','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_171\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('171','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_171\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('171','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_171\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Castellana2021,<br \/>\r\ntitle = {A Tensor Framework for Learning in Structured Domains},<br \/>\r\nauthor = {Daniele Castellana and Davide 
Bacciu},<br \/>\r\neditor = {Kerstin Bunte and Niccolo Navarin and Luca Oneto},<br \/>\r\ndoi = {10.1016\/j.neucom.2021.05.110},<br \/>\r\nyear  = {2022},<br \/>\r\ndate = {2022-01-22},<br \/>\r\nurldate = {2022-01-22},<br \/>\r\njournal = {Neurocomputing},<br \/>\r\nvolume = {470},<br \/>\r\npages = {405-426},<br \/>\r\nabstract = {Learning machines for structured data (e.g., trees) are intrinsically based on their capacity to learn representations by aggregating information from the multi-way relationships emerging from the structure topology. While complex aggregation functions are desirable in this context to increase the expressiveness of the learned representations, the modelling of higher-order interactions among structure constituents is unfeasible, in practice, due to the exponential number of parameters required. Therefore, the common approach is to define models which rely only on first-order interactions among structure constituents.<br \/>\r\nIn this work, we leverage tensor theory to define a framework for learning in structured domains. Such a framework is built on the observation that more expressive models require a tensor parameterisation. This observation is the stepping stone for the application of tensor decompositions in the context of recursive models. From this point of view, the advantage of using tensor decompositions is twofold since it allows limiting the number of model parameters while injecting inductive biases that do not ignore higher-order interactions.<br \/>\r\nWe apply the proposed framework on probabilistic and neural models for structured data, defining different models which leverage tensor decompositions. 
The experimental validation clearly shows the advantage of these models compared to first-order and full-tensorial models.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('171','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_171\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Learning machines for structured data (e.g., trees) are intrinsically based on their capacity to learn representations by aggregating information from the multi-way relationships emerging from the structure topology. While complex aggregation functions are desirable in this context to increase the expressiveness of the learned representations, the modelling of higher-order interactions among structure constituents is unfeasible, in practice, due to the exponential number of parameters required. Therefore, the common approach is to define models which rely only on first-order interactions among structure constituents.<br \/>\r\nIn this work, we leverage tensor theory to define a framework for learning in structured domains. Such a framework is built on the observation that more expressive models require a tensor parameterisation. This observation is the stepping stone for the application of tensor decompositions in the context of recursive models. From this point of view, the advantage of using tensor decompositions is twofold since it allows limiting the number of model parameters while injecting inductive biases that do not ignore higher-order interactions.<br \/>\r\nWe apply the proposed framework on probabilistic and neural models for structured data, defining different models which leverage tensor decompositions. 
The experimental validation clearly shows the advantage of these models compared to first-order and full-tensorial models.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('171','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_171\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1016\/j.neucom.2021.05.110\" title=\"Follow DOI:10.1016\/j.neucom.2021.05.110\" target=\"_blank\">doi:10.1016\/j.neucom.2021.05.110<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('171','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"A Tensor Framework for Learning in Structured Domains\" src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/neurocomputing.png\" width=\"80\" alt=\"A Tensor Framework for Learning in Structured Domains\" \/><\/div><\/div><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_number\">20.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Carta, Antonio;  Cossu, Andrea;  Errica, Federico;  Bacciu, Davide<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('195','tp_links')\" style=\"cursor:pointer;\">Catastrophic Forgetting in Deep Graph Networks: a Graph Classification benchmark<\/a> <span class=\"tp_pub_type article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Frontiers in Artificial Intelligence , <\/span><span class=\"tp_pub_additional_year\">2022<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_195\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('195','tp_abstract')\" title=\"Show abstract\" 
style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_195\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('195','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_195\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('195','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_195\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Carta2022,<br \/>\r\ntitle = {Catastrophic Forgetting in Deep Graph Networks: a Graph Classification benchmark},<br \/>\r\nauthor = {Antonio Carta and Andrea Cossu and Federico Errica and Davide Bacciu},<br \/>\r\ndoi = {10.3389\/frai.2022.824655},<br \/>\r\nyear  = {2022},<br \/>\r\ndate = {2022-01-11},<br \/>\r\nurldate = {2022-01-11},<br \/>\r\njournal = {Frontiers in Artificial Intelligence },<br \/>\r\nabstract = { In this work, we study the phenomenon of catastrophic forgetting in the graph representation learning scenario. The primary objective of the analysis is to understand whether classical continual learning techniques for flat and sequential data have a tangible impact on performances when applied to graph data. To do so, we experiment with a structure-agnostic model and a deep graph network in a robust and controlled environment on three different datasets. The benchmark is complemented by an investigation on the effect of structure-preserving regularization techniques on catastrophic forgetting. We find that replay is the most effective strategy in so far, which also benefits the most from the use of regularization. Our findings suggest interesting future research at the intersection of the continual and graph representation learning fields. Finally, we provide researchers with a flexible software framework to reproduce our results and carry out further experiments. 
},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('195','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_195\" style=\"display:none;\"><div class=\"tp_abstract_entry\">In this work, we study the phenomenon of catastrophic forgetting in the graph representation learning scenario. The primary objective of the analysis is to understand whether classical continual learning techniques for flat and sequential data have a tangible impact on performance when applied to graph data. To do so, we experiment with a structure-agnostic model and a deep graph network in a robust and controlled environment on three different datasets. The benchmark is complemented by an investigation on the effect of structure-preserving regularization techniques on catastrophic forgetting. We find that replay is the most effective strategy so far, which also benefits the most from the use of regularization. Our findings suggest interesting future research at the intersection of the continual and graph representation learning fields. Finally, we provide researchers with a flexible software framework to reproduce our results and carry out further experiments. 
<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('195','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_195\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.3389\/frai.2022.824655\" title=\"Follow DOI:10.3389\/frai.2022.824655\" target=\"_blank\">doi:10.3389\/frai.2022.824655<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('195','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Catastrophic Forgetting in Deep Graph Networks: a Graph Classification benchmark\" src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/frontAI.jpg\" width=\"80\" alt=\"Catastrophic Forgetting in Deep Graph Networks: a Graph Classification benchmark\" \/><\/div><\/div><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_number\">21.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Cossu, Andrea;  Graffieti, Gabriele;  Pellegrini, Lorenzo;  Maltoni, Davide;  Bacciu, Davide;  Carta, Antonio;  Lomonaco, Vincenzo<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('197','tp_links')\" style=\"cursor:pointer;\">Is Class-Incremental Enough for Continual Learning?<\/a> <span class=\"tp_pub_type article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Frontiers in Artificial Intelligence, <\/span><span class=\"tp_pub_additional_volume\">vol. 
5, <\/span><span class=\"tp_pub_additional_year\">2022<\/span>, <span class=\"tp_pub_additional_issn\">ISSN: 2624-8212<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_197\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('197','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_197\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('197','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_197\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('197','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_197\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{10.3389\/frai.2022.829842,<br \/>\r\ntitle = {Is Class-Incremental Enough for Continual Learning?},<br \/>\r\nauthor = {Andrea Cossu and Gabriele Graffieti and Lorenzo Pellegrini and Davide Maltoni and Davide Bacciu and Antonio Carta and Vincenzo Lomonaco},<br \/>\r\nurl = {https:\/\/www.frontiersin.org\/article\/10.3389\/frai.2022.829842},<br \/>\r\ndoi = {10.3389\/frai.2022.829842},<br \/>\r\nissn = {2624-8212},<br \/>\r\nyear  = {2022},<br \/>\r\ndate = {2022-01-01},<br \/>\r\nurldate = {2022-01-01},<br \/>\r\njournal = {Frontiers in Artificial Intelligence},<br \/>\r\nvolume = {5},<br \/>\r\nabstract = {The ability of a model to learn continually can be empirically assessed in different continual learning scenarios. Each scenario defines the constraints and the opportunities of the learning environment. Here, we challenge the current trend in the continual learning literature to experiment mainly on class-incremental scenarios, where classes present in one experience are never revisited. 
We posit that an excessive focus on this setting may be limiting for future research on continual learning, since class-incremental scenarios artificially exacerbate catastrophic forgetting, at the expense of other important objectives like forward transfer and computational efficiency. In many real-world environments, in fact, repetition of previously encountered concepts occurs naturally and contributes to softening the disruption of previous knowledge. We advocate for a more in-depth study of alternative continual learning scenarios, in which repetition is integrated by design in the stream of incoming information. Starting from already existing proposals, we describe the advantages such class-incremental with repetition scenarios could offer for a more comprehensive assessment of continual learning models.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('197','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_197\" style=\"display:none;\"><div class=\"tp_abstract_entry\">The ability of a model to learn continually can be empirically assessed in different continual learning scenarios. Each scenario defines the constraints and the opportunities of the learning environment. Here, we challenge the current trend in the continual learning literature to experiment mainly on class-incremental scenarios, where classes present in one experience are never revisited. We posit that an excessive focus on this setting may be limiting for future research on continual learning, since class-incremental scenarios artificially exacerbate catastrophic forgetting, at the expense of other important objectives like forward transfer and computational efficiency. 
In many real-world environments, in fact, repetition of previously encountered concepts occurs naturally and contributes to softening the disruption of previous knowledge. We advocate for a more in-depth study of alternative continual learning scenarios, in which repetition is integrated by design in the stream of incoming information. Starting from already existing proposals, we describe the advantages such class-incremental with repetition scenarios could offer for a more comprehensive assessment of continual learning models.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('197','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_197\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-globe\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/www.frontiersin.org\/article\/10.3389\/frai.2022.829842\" title=\"https:\/\/www.frontiersin.org\/article\/10.3389\/frai.2022.829842\" target=\"_blank\">https:\/\/www.frontiersin.org\/article\/10.3389\/frai.2022.829842<\/a><\/li><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.3389\/frai.2022.829842\" title=\"Follow DOI:10.3389\/frai.2022.829842\" target=\"_blank\">doi:10.3389\/frai.2022.829842<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('197','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Is Class-Incremental Enough for Continual Learning?\" src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/frontAI.jpg\" width=\"80\" alt=\"Is Class-Incremental Enough for Continual Learning?\" \/><\/div><\/div><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_number\">22.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Atzeni, Daniele;  Bacciu, Davide;  Mazzei, Daniele;  Prencipe, Giuseppe<\/p><p 
class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('213','tp_links')\" style=\"cursor:pointer;\">A Systematic Review of Wi-Fi and Machine Learning Integration with Topic Modeling Techniques<\/a> <span class=\"tp_pub_type article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Sensors, <\/span><span class=\"tp_pub_additional_volume\">vol. 22, <\/span><span class=\"tp_pub_additional_number\">no. 13, <\/span><span class=\"tp_pub_additional_year\">2022<\/span>, <span class=\"tp_pub_additional_issn\">ISSN: 1424-8220<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_213\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('213','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_213\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('213','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_213\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('213','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_213\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{atzeni2022,<br \/>\r\ntitle = {A Systematic Review of Wi-Fi and Machine Learning Integration with Topic Modeling Techniques},<br \/>\r\nauthor = {Daniele Atzeni and Davide Bacciu and Daniele Mazzei and Giuseppe Prencipe},<br \/>\r\nurl = {https:\/\/www.mdpi.com\/1424-8220\/22\/13\/4925},<br \/>\r\ndoi = {10.3390\/s22134925},<br \/>\r\nissn = {1424-8220},<br \/>\r\nyear  = {2022},<br \/>\r\ndate = {2022-01-01},<br \/>\r\nurldate = {2022-01-01},<br \/>\r\njournal = {Sensors},<br \/>\r\nvolume = {22},<br \/>\r\nnumber = {13},<br \/>\r\nabstract = 
{Wireless networks have drastically influenced our lifestyle, changing our workplaces and society. Among the variety of wireless technology, Wi-Fi surely plays a leading role, especially in local area networks. The spread of mobiles and tablets, and more recently, the advent of Internet of Things, have resulted in a multitude of Wi-Fi-enabled devices continuously sending data to the Internet and between each other. At the same time, Machine Learning has proven to be one of the most effective and versatile tools for the analysis of fast streaming data. This systematic review aims at studying the interaction between these technologies and how it has developed throughout their lifetimes. We used Scopus, Web of Science, and IEEE Xplore databases to retrieve paper abstracts and leveraged a topic modeling technique, namely, BERTopic, to analyze the resulting document corpus. After these steps, we inspected the obtained clusters and computed statistics to characterize and interpret the topics they refer to. Our results include both the applications of Wi-Fi sensing and the variety of Machine Learning algorithms used to tackle them. We also report how the Wi-Fi advances have affected sensing applications and the choice of the most suitable Machine Learning models.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('213','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_213\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Wireless networks have drastically influenced our lifestyle, changing our workplaces and society. Among the variety of wireless technology, Wi-Fi surely plays a leading role, especially in local area networks. 
The spread of mobiles and tablets, and more recently, the advent of Internet of Things, have resulted in a multitude of Wi-Fi-enabled devices continuously sending data to the Internet and between each other. At the same time, Machine Learning has proven to be one of the most effective and versatile tools for the analysis of fast streaming data. This systematic review aims at studying the interaction between these technologies and how it has developed throughout their lifetimes. We used Scopus, Web of Science, and IEEE Xplore databases to retrieve paper abstracts and leveraged a topic modeling technique, namely, BERTopic, to analyze the resulting document corpus. After these steps, we inspected the obtained clusters and computed statistics to characterize and interpret the topics they refer to. Our results include both the applications of Wi-Fi sensing and the variety of Machine Learning algorithms used to tackle them. We also report how the Wi-Fi advances have affected sensing applications and the choice of the most suitable Machine Learning models.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('213','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_213\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-globe\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/www.mdpi.com\/1424-8220\/22\/13\/4925\" title=\"https:\/\/www.mdpi.com\/1424-8220\/22\/13\/4925\" target=\"_blank\">https:\/\/www.mdpi.com\/1424-8220\/22\/13\/4925<\/a><\/li><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.3390\/s22134925\" title=\"Follow DOI:10.3390\/s22134925\" target=\"_blank\">doi:10.3390\/s22134925<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('213','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"A Systematic 
Review of Wi-Fi and Machine Learning Integration with Topic Modeling Techniques\" src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/sensors.png\" width=\"80\" alt=\"A Systematic Review of Wi-Fi and Machine Learning Integration with Topic Modeling Techniques\" \/><\/div><\/div><h3 class=\"tp_h3\" id=\"tp_h3_2021\">2021<\/h3><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_number\">23.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Cossu, Andrea;  Carta, Antonio;  Lomonaco, Vincenzo;  Bacciu, Davide<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('182','tp_links')\" style=\"cursor:pointer;\">Continual Learning for Recurrent Neural Networks: an Empirical Evaluation<\/a> <span class=\"tp_pub_type article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Neural Networks, <\/span><span class=\"tp_pub_additional_volume\">vol. 143, <\/span><span class=\"tp_pub_additional_pages\">pp. 
607-627, <\/span><span class=\"tp_pub_additional_year\">2021<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_182\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('182','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_182\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('182','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_182\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('182','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_182\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Cossu2021b,<br \/>\r\ntitle = {Continual Learning for Recurrent Neural Networks: an Empirical Evaluation},<br \/>\r\nauthor = {Andrea Cossu and Antonio Carta and Vincenzo Lomonaco and Davide Bacciu},<br \/>\r\nurl = {https:\/\/arxiv.org\/abs\/2103.07492, Arxiv},<br \/>\r\nyear  = {2021},<br \/>\r\ndate = {2021-12-03},<br \/>\r\nurldate = {2021-12-03},<br \/>\r\njournal = {Neural Networks},<br \/>\r\nvolume = {143},<br \/>\r\npages = {607-627},<br \/>\r\nabstract = {     Learning continuously during all model lifetime is fundamental to deploy machine learning solutions robust to drifts in the data distribution. Advances in Continual Learning (CL) with recurrent neural networks could pave the way to a large number of applications where incoming data is non stationary, like natural language processing and robotics. However, the existing body of work on the topic is still fragmented, with approaches which are application-specific and whose assessment is based on heterogeneous learning protocols and datasets. 
In this paper, we organize the literature on CL for sequential data processing by providing a categorization of the contributions and a review of the benchmarks. We propose two new benchmarks for CL with sequential data based on existing datasets, whose characteristics resemble real-world applications. We also provide a broad empirical evaluation of CL and Recurrent Neural Networks in class-incremental scenario, by testing their ability to mitigate forgetting with a number of different strategies which are not specific to sequential data processing. Our results highlight the key role played by the sequence length and the importance of a clear specification of the CL scenario. },<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('182','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_182\" style=\"display:none;\"><div class=\"tp_abstract_entry\">     Learning continuously during all model lifetime is fundamental to deploy machine learning solutions robust to drifts in the data distribution. Advances in Continual Learning (CL) with recurrent neural networks could pave the way to a large number of applications where incoming data is non stationary, like natural language processing and robotics. However, the existing body of work on the topic is still fragmented, with approaches which are application-specific and whose assessment is based on heterogeneous learning protocols and datasets. In this paper, we organize the literature on CL for sequential data processing by providing a categorization of the contributions and a review of the benchmarks. We propose two new benchmarks for CL with sequential data based on existing datasets, whose characteristics resemble real-world applications. 
We also provide a broad empirical evaluation of CL and Recurrent Neural Networks in class-incremental scenario, by testing their ability to mitigate forgetting with a number of different strategies which are not specific to sequential data processing. Our results highlight the key role played by the sequence length and the importance of a clear specification of the CL scenario. <\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('182','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_182\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-arxiv\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/arxiv.org\/abs\/2103.07492\" title=\"Arxiv\" target=\"_blank\">Arxiv<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('182','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Continual Learning for Recurrent Neural Networks: an Empirical Evaluation\" src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2020\/06\/08936080.jpg\" width=\"80\" alt=\"Continual Learning for Recurrent Neural Networks: an Empirical Evaluation\" \/><\/div><\/div><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_number\">24.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Carta, Antonio;  Sperduti, Alessandro;  Bacciu, Davide<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('170','tp_links')\" style=\"cursor:pointer;\">Encoding-based Memory for Recurrent Neural Networks<\/a> <span class=\"tp_pub_type article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Neurocomputing, <\/span><span class=\"tp_pub_additional_volume\">vol. 
456, <\/span><span class=\"tp_pub_additional_pages\">pp. 407-420, <\/span><span class=\"tp_pub_additional_year\">2021<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_170\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('170','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_170\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('170','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_170\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('170','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_170\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Carta2021b,<br \/>\r\ntitle = {Encoding-based Memory for Recurrent Neural Networks},<br \/>\r\nauthor = {Antonio Carta and Alessandro Sperduti and Davide Bacciu},<br \/>\r\nurl = {https:\/\/arxiv.org\/abs\/2001.11771, Arxiv},<br \/>\r\ndoi = {10.1016\/j.neucom.2021.04.051},<br \/>\r\nyear  = {2021},<br \/>\r\ndate = {2021-10-07},<br \/>\r\nurldate = {2021-10-07},<br \/>\r\njournal = {Neurocomputing},<br \/>\r\nvolume = {456},<br \/>\r\npages = {407-420},<br \/>\r\npublisher = {Elsevier},<br \/>\r\nabstract = {Learning to solve sequential tasks with recurrent models requires the ability to memorize long sequences and to extract task-relevant features from them. In this paper, we study the memorization subtask from the point of view of the design and training of recurrent neural networks. We propose a new model, the Linear Memory Network, which features an encoding-based memorization component built with a linear autoencoder for sequences. We extend the memorization component with a modular memory that encodes the hidden state sequence at different sampling frequencies. 
Additionally, we provide a specialized training algorithm that initializes the memory to efficiently encode the hidden activations of the network. The experimental results on synthetic and real-world datasets show that specializing the training algorithm to train the memorization component always improves the final performance whenever the memorization of long sequences is necessary to solve the problem. },<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('170','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_170\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Learning to solve sequential tasks with recurrent models requires the ability to memorize long sequences and to extract task-relevant features from them. In this paper, we study the memorization subtask from the point of view of the design and training of recurrent neural networks. We propose a new model, the Linear Memory Network, which features an encoding-based memorization component built with a linear autoencoder for sequences. We extend the memorization component with a modular memory that encodes the hidden state sequence at different sampling frequencies. Additionally, we provide a specialized training algorithm that initializes the memory to efficiently encode the hidden activations of the network. The experimental results on synthetic and real-world datasets show that specializing the training algorithm to train the memorization component always improves the final performance whenever the memorization of long sequences is necessary to solve the problem. 
<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('170','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_170\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-arxiv\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/arxiv.org\/abs\/2001.11771\" title=\"Arxiv\" target=\"_blank\">Arxiv<\/a><\/li><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1016\/j.neucom.2021.04.051\" title=\"Follow DOI:10.1016\/j.neucom.2021.04.051\" target=\"_blank\">doi:10.1016\/j.neucom.2021.04.051<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('170','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Encoding-based Memory for Recurrent Neural Networks\" src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/neurocomputing.png\" width=\"80\" alt=\"Encoding-based Memory for Recurrent Neural Networks\" \/><\/div><\/div><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_number\">25.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Averta, Giuseppe;  Barontini, Federica;  Valdambrini, Irene;  Cheli, Paolo;  Bacciu, Davide;  Bianchi, Matteo<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('190','tp_links')\" style=\"cursor:pointer;\">Learning to Prevent Grasp Failure with Soft Hands: From Online Prediction to Dual-Arm Grasp Recovery<\/a> <span class=\"tp_pub_type article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Advanced Intelligent Systems, <\/span><span class=\"tp_pub_additional_year\">2021<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_190\" class=\"tp_show\" 
onclick=\"teachpress_pub_showhide('190','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_190\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('190','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_190\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('190','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_190\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Averta2021,<br \/>\r\ntitle = {Learning to Prevent Grasp Failure with Soft Hands: From Online Prediction to Dual-Arm Grasp Recovery},<br \/>\r\nauthor = {Giuseppe Averta and Federica Barontini and Irene Valdambrini and Paolo Cheli and Davide Bacciu and Matteo Bianchi},<br \/>\r\ndoi = {10.1002\/aisy.202100146},<br \/>\r\nyear  = {2021},<br \/>\r\ndate = {2021-10-07},<br \/>\r\nurldate = {2021-10-07},<br \/>\r\njournal = {Advanced Intelligent Systems},<br \/>\r\nabstract = {Soft hands allow to simplify the grasp planning to achieve a successful grasp, thanks to their intrinsic adaptability. At the same time, their usage poses new challenges, related to the adoption of classical sensing techniques originally developed for rigid end effectors, which provide fundamental information, such as to detect object slippage. Under this regard, model-based approaches for the processing of the gathered information are hard to use, due to the difficulties in modeling hand\u2013object interaction when softness is involved. 
To overcome these limitations, in this article, we proposed to combine distributed tactile sensing and machine learning (recurrent neural network) to detect sliding conditions for a soft robotic hand mounted on a robotic manipulator, targeting the prediction of the grasp failure event and the direction of sliding. The outcomes of these predictions allow for an online triggering of a compensatory action performed with a second robotic arm\u2013hand system, to prevent the failure. Despite the fact that the network is trained only with spherical and cylindrical objects, we demonstrate high generalization capabilities of our framework, achieving a correct prediction of the failure direction in 75% of cases, and an 85% rate of successful regrasps, for a selection of 12 objects of common use.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('190','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_190\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Soft hands allow to simplify the grasp planning to achieve a successful grasp, thanks to their intrinsic adaptability. At the same time, their usage poses new challenges, related to the adoption of classical sensing techniques originally developed for rigid end effectors, which provide fundamental information, such as to detect object slippage. Under this regard, model-based approaches for the processing of the gathered information are hard to use, due to the difficulties in modeling hand\u2013object interaction when softness is involved. 
To overcome these limitations, in this article, we proposed to combine distributed tactile sensing and machine learning (recurrent neural network) to detect sliding conditions for a soft robotic hand mounted on a robotic manipulator, targeting the prediction of the grasp failure event and the direction of sliding. The outcomes of these predictions allow for an online triggering of a compensatory action performed with a second robotic arm\u2013hand system, to prevent the failure. Despite the fact that the network is trained only with spherical and cylindrical objects, we demonstrate high generalization capabilities of our framework, achieving a correct prediction of the failure direction in 75% of cases, and an 85% rate of successful regrasps, for a selection of 12 objects of common use.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('190','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_190\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1002\/aisy.202100146\" title=\"Follow DOI:10.1002\/aisy.202100146\" target=\"_blank\">doi:10.1002\/aisy.202100146<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('190','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Learning to Prevent Grasp Failure with Soft Hands: From Online Prediction to Dual-Arm Grasp Recovery\" src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2022\/01\/aisi.png\" width=\"80\" alt=\"Learning to Prevent Grasp Failure with Soft Hands: From Online Prediction to Dual-Arm Grasp Recovery\" \/><\/div><\/div><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_number\">26.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Bacciu, Davide;  Conte, Alessio;  Grossi, 
Roberto;  Landolfi, Francesco;  Marino, Andrea<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('176','tp_links')\" style=\"cursor:pointer;\">K-Plex Cover Pooling for Graph Neural Networks<\/a> <span class=\"tp_pub_type article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Data Mining and Knowledge Discovery, <\/span><span class=\"tp_pub_additional_year\">2021<\/span><span class=\"tp_pub_additional_note\">, (Accepted also as paper to the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD 2021))<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_176\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('176','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_176\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('176','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_176\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('176','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_176\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Bacciu2021b,<br \/>\r\ntitle = {K-Plex Cover Pooling for Graph Neural Networks},<br \/>\r\nauthor = {Davide Bacciu and Alessio Conte and Roberto Grossi and Francesco Landolfi and Andrea Marino},<br \/>\r\neditor = {Annalisa Appice and Sergio Escalera and Jos\u00e9 A. 
G\u00e1mez and Heike Trautmann},<br \/>\r\nurl = {https:\/\/link.springer.com\/article\/10.1007\/s10618-021-00779-z, Published version},<br \/>\r\ndoi = {10.1007\/s10618-021-00779-z},<br \/>\r\nyear  = {2021},<br \/>\r\ndate = {2021-09-13},<br \/>\r\nurldate = {2021-09-13},<br \/>\r\njournal = {Data Mining and Knowledge Discovery},<br \/>\r\nabstract = {Graph pooling methods provide mechanisms for structure reduction that are intended to ease the diffusion of context between nodes further in the graph, and that typically leverage community discovery mechanisms or node and edge pruning heuristics. In this paper, we introduce a novel pooling technique which borrows from classical results in graph theory that is non-parametric and generalizes well to graphs of different nature and connectivity patterns. Our pooling method, named KPlexPool, builds on the concepts of graph covers and k-plexes, i.e. pseudo-cliques where each node can miss up to k links. The experimental evaluation on benchmarks on molecular and social graph classification shows that KPlexPool achieves state of the art performances against both parametric and non-parametric pooling methods in the literature, despite generating pooled graphs based solely on topological information.},<br \/>\r\nnote = {Accepted also as paper to the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD 2021)},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('176','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_176\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Graph pooling methods provide mechanisms for structure reduction that are intended to ease the diffusion of context between nodes further in the graph, and that typically leverage community discovery 
mechanisms or node and edge pruning heuristics. In this paper, we introduce a novel pooling technique which borrows from classical results in graph theory that is non-parametric and generalizes well to graphs of different nature and connectivity patterns. Our pooling method, named KPlexPool, builds on the concepts of graph covers and k-plexes, i.e. pseudo-cliques where each node can miss up to k links. The experimental evaluation on benchmarks on molecular and social graph classification shows that KPlexPool achieves state of the art performances against both parametric and non-parametric pooling methods in the literature, despite generating pooled graphs based solely on topological information.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('176','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_176\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-globe\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/link.springer.com\/article\/10.1007\/s10618-021-00779-z\" title=\"Published version\" target=\"_blank\">Published version<\/a><\/li><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1007\/s10618-021-00779-z\" title=\"Follow DOI:10.1007\/s10618-021-00779-z\" target=\"_blank\">doi:10.1007\/s10618-021-00779-z<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('176','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"K-Plex Cover Pooling for Graph Neural Networks\" src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/ecml.png\" width=\"80\" alt=\"K-Plex Cover Pooling for Graph Neural Networks\" \/><\/div><\/div><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_number\">27.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Resta, Michele;  
Monreale, Anna;  Bacciu, Davide<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('184','tp_links')\" style=\"cursor:pointer;\"> Occlusion-based Explanations in Deep Recurrent Models for Biomedical Signals <\/a> <span class=\"tp_pub_type article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Entropy, <\/span><span class=\"tp_pub_additional_volume\">vol. 23, <\/span><span class=\"tp_pub_additional_number\">no. 8, <\/span><span class=\"tp_pub_additional_pages\">pp. 1064, <\/span><span class=\"tp_pub_additional_year\">2021<\/span><span class=\"tp_pub_additional_note\">, (Special issue on Representation Learning)<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_184\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('184','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_184\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('184','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_184\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('184','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_184\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Resta2021,<br \/>\r\ntitle = { Occlusion-based Explanations in Deep Recurrent Models for Biomedical Signals },<br \/>\r\nauthor = {Michele Resta and Anna Monreale and Davide Bacciu},<br \/>\r\neditor = {Fabio Aiolli and Mirko Polato},<br \/>\r\ndoi = {10.3390\/e23081064},<br \/>\r\nyear  = {2021},<br \/>\r\ndate = {2021-09-01},<br \/>\r\nurldate = {2021-09-01},<br \/>\r\njournal = {Entropy},<br \/>\r\nvolume = {23},<br \/>\r\nnumber = {8},<br 
\/>\r\npages = {1064},<br \/>\r\nabstract = { The biomedical field is characterized by an ever-increasing production of sequential data, which often come in the form of biosignals capturing the time-evolution of physiological processes, such as blood pressure and brain activity. This has motivated a large body of research dealing with the development of machine learning techniques for the predictive analysis of such biosignals. Unfortunately, in high-stakes decision making, such as clinical diagnosis, the opacity of machine learning models becomes a crucial aspect to be addressed in order to increase the trust and adoption of AI technology. In this paper we propose a model-agnostic explanation method, based on occlusion, enabling the learning of the input influence on the model predictions. We specifically target problems involving the predictive analysis of time-series data and the models which are typically used to deal with data of such nature, i.e. recurrent neural networks. Our approach is able to provide two different kinds of explanations: one suitable for technical experts, who need to verify the quality and correctness of machine learning models, and one suited to physicians, who need to understand the rationale underlying the prediction to make informed decisions. Extensive experimentation on different physiological data demonstrates the effectiveness of our approach, both in classification and regression tasks. 
},<br \/>\r\nnote = {Special issue on Representation Learning},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('184','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_184\" style=\"display:none;\"><div class=\"tp_abstract_entry\"> The biomedical field is characterized by an ever-increasing production of sequential data, which often come in the form of biosignals capturing the time-evolution of physiological processes, such as blood pressure and brain activity. This has motivated a large body of research dealing with the development of machine learning techniques for the predictive analysis of such biosignals. Unfortunately, in high-stakes decision making, such as clinical diagnosis, the opacity of machine learning models becomes a crucial aspect to be addressed in order to increase the trust and adoption of AI technology. In this paper we propose a model-agnostic explanation method, based on occlusion, enabling the learning of the input influence on the model predictions. We specifically target problems involving the predictive analysis of time-series data and the models which are typically used to deal with data of such nature, i.e. recurrent neural networks. Our approach is able to provide two different kinds of explanations: one suitable for technical experts, who need to verify the quality and correctness of machine learning models, and one suited to physicians, who need to understand the rationale underlying the prediction to make informed decisions. Extensive experimentation on different physiological data demonstrates the effectiveness of our approach, both in classification and regression tasks. 
<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('184','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_184\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.3390\/e23081064\" title=\"Follow DOI:10.3390\/e23081064\" target=\"_blank\">doi:10.3390\/e23081064<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('184','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\" Occlusion-based Explanations in Deep Recurrent Models for Biomedical Signals \" src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/entropy.png\" width=\"80\" alt=\" Occlusion-based Explanations in Deep Recurrent Models for Biomedical Signals \" \/><\/div><\/div><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_number\">28.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Errica, Federico;  Giulini, Marco;  Bacciu, Davide;  Menichetti, Roberto;  Micheli, Alessio;  Potestio, Raffaello<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('168','tp_links')\" style=\"cursor:pointer;\">A deep graph network-enhanced sampling approach to efficiently explore the space of reduced representations of proteins<\/a> <span class=\"tp_pub_type article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Frontiers in Molecular Biosciences, <\/span><span class=\"tp_pub_additional_volume\">vol. 8, <\/span><span class=\"tp_pub_additional_pages\">pp. 
136, <\/span><span class=\"tp_pub_additional_year\">2021<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_resource_link\"><a id=\"tp_links_sh_168\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('168','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_168\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('168','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_168\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{errica_deep_2021,<br \/>\r\ntitle = {A deep graph network-enhanced sampling approach to efficiently explore the space of reduced representations of proteins},<br \/>\r\nauthor = {Federico Errica and Marco Giulini and Davide Bacciu and Roberto Menichetti and Alessio Micheli and Raffaello Potestio},<br \/>\r\ndoi = {10.3389\/fmolb.2021.637396},<br \/>\r\nyear  = {2021},<br \/>\r\ndate = {2021-02-28},<br \/>\r\nurldate = {2021-02-28},<br \/>\r\njournal = {Frontiers in Molecular Biosciences},<br \/>\r\nvolume = {8},<br \/>\r\npages = {136},<br \/>\r\npublisher = {Frontiers},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('168','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_168\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.3389\/fmolb.2021.637396\" title=\"Follow DOI:10.3389\/fmolb.2021.637396\" target=\"_blank\">doi:10.3389\/fmolb.2021.637396<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('168','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img 
decoding=\"async\" name=\"A deep graph network-enhanced sampling approach to efficiently explore the space of reduced representations of proteins\" src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/frontMolbio.png\" width=\"80\" alt=\"A deep graph network-enhanced sampling approach to efficiently explore the space of reduced representations of proteins\" \/><\/div><\/div><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_number\">29.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Bontempi, Gianluca;  Chavarriaga, Ricardo;  Canck, Hans De;  Girardi, Emanuela;  Hoos, Holger;  Kilbane-Dawe, Iarla;  Ball, Tonio;  Now\u00e9, Ann;  Sousa, Jose;  Bacciu, Davide;  Aldinucci, Marco;  Domenico, Manlio De;  Saffiotti, Alessandro;  Maratea, Marco<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('162','tp_links')\" style=\"cursor:pointer;\">The CLAIRE COVID-19 initiative: approach, experiences and recommendations<\/a> <span class=\"tp_pub_type article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Ethics and Information Technology, <\/span><span class=\"tp_pub_additional_year\">2021<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_resource_link\"><a id=\"tp_links_sh_162\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('162','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_162\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('162','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_162\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Bontempi2021,<br \/>\r\ntitle = {The CLAIRE COVID-19 initiative: approach, experiences and recommendations},<br 
\/>\r\nauthor = {Gianluca Bontempi and Ricardo Chavarriaga and Hans De Canck and Emanuela Girardi and Holger Hoos and Iarla Kilbane-Dawe and Tonio Ball and Ann Now\u00e9 and Jose Sousa and Davide Bacciu and Marco Aldinucci and Manlio De Domenico and Alessandro Saffiotti and Marco Maratea},<br \/>\r\ndoi = {10.1007\/s10676-020-09567-7},<br \/>\r\nyear  = {2021},<br \/>\r\ndate = {2021-02-09},<br \/>\r\njournal = {Ethics and Information Technology},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('162','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_162\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1007\/s10676-020-09567-7\" title=\"Follow DOI:10.1007\/s10676-020-09567-7\" target=\"_blank\">doi:10.1007\/s10676-020-09567-7<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('162','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><\/div><\/div><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_number\">30.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Valenti, Andrea;  Barsotti, Michele;  Bacciu, Davide;  Ascari, Luca<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('161','tp_links')\" style=\"cursor:pointer;\">A Deep Classifier for Upper-Limbs Motor Anticipation Tasks in an Online BCI Setting<\/a> <span class=\"tp_pub_type article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Bioengineering , <\/span><span class=\"tp_pub_additional_year\">2021<\/span>.<\/p><p class=\"tp_pub_menu\"><span 
class=\"tp_resource_link\"><a id=\"tp_links_sh_161\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('161','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_161\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('161','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_161\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Valenti2021,<br \/>\r\ntitle = {A Deep Classifier for Upper-Limbs Motor Anticipation Tasks in an Online BCI Setting},<br \/>\r\nauthor = {Andrea Valenti and Michele Barsotti and Davide Bacciu and Luca Ascari},<br \/>\r\nurl = {https:\/\/www.mdpi.com\/2306-5354\/8\/2\/21, Open Access },<br \/>\r\ndoi = {10.3390\/bioengineering8020021},<br \/>\r\nyear  = {2021},<br \/>\r\ndate = {2021-02-05},<br \/>\r\nurldate = {2021-02-05},<br \/>\r\njournal = {Bioengineering },<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('161','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_161\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-globe\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/www.mdpi.com\/2306-5354\/8\/2\/21\" title=\"Open Access \" target=\"_blank\">Open Access <\/a><\/li><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.3390\/bioengineering8020021\" title=\"Follow DOI:10.3390\/bioengineering8020021\" target=\"_blank\">doi:10.3390\/bioengineering8020021<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('161','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" 
name=\"A Deep Classifier for Upper-Limbs Motor Anticipation Tasks in an Online BCI Setting\" src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/bioeng.png\" width=\"80\" alt=\"A Deep Classifier for Upper-Limbs Motor Anticipation Tasks in an Online BCI Setting\" \/><\/div><\/div><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_number\">31.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Bacciu, Davide;  Bertoncini, Gioele;  Morelli, Davide<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('160','tp_links')\" style=\"cursor:pointer;\">Topographic mapping for quality inspection and intelligent filtering of smart-bracelet data<\/a> <span class=\"tp_pub_type article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Neural Computing Applications, <\/span><span class=\"tp_pub_additional_year\">2021<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_resource_link\"><a id=\"tp_links_sh_160\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('160','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_160\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('160','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_160\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{BacciuNCA2020,<br \/>\r\ntitle = {Topographic mapping for quality inspection and intelligent filtering of smart-bracelet data},<br \/>\r\nauthor = {Davide Bacciu and Gioele Bertoncini and Davide Morelli},<br \/>\r\ndoi = {10.1007\/s00521-020-05600-4},<br \/>\r\nyear  = {2021},<br \/>\r\ndate = {2021-01-04},<br \/>\r\nurldate = {2021-01-04},<br \/>\r\njournal = {Neural Computing Applications},<br 
\/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('160','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_160\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1007\/s00521-020-05600-4\" title=\"Follow DOI:10.1007\/s00521-020-05600-4\" target=\"_blank\">doi:10.1007\/s00521-020-05600-4<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('160','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Topographic mapping for quality inspection and intelligent filtering of smart-bracelet data\" src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/nca.jpg\" width=\"80\" alt=\"Topographic mapping for quality inspection and intelligent filtering of smart-bracelet data\" \/><\/div><\/div><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_number\">32.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Crecchi, Francesco;  Melis, Marco;  Sotgiu, Angelo;  Bacciu, Davide;  Biggio, Battista<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('189','tp_links')\" style=\"cursor:pointer;\">FADER: Fast Adversarial Example Rejection<\/a> <span class=\"tp_pub_type article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Neurocomputing, <\/span><span class=\"tp_pub_additional_year\">2021<\/span>, <span class=\"tp_pub_additional_issn\">ISSN: 0925-2312<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_resource_link\"><a id=\"tp_links_sh_189\" class=\"tp_show\" 
onclick=\"teachpress_pub_showhide('189','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_189\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('189','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_189\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{CRECCHI2021,<br \/>\r\ntitle = {FADER: Fast Adversarial Example Rejection},<br \/>\r\nauthor = {Francesco Crecchi and Marco Melis and Angelo Sotgiu and Davide Bacciu and Battista Biggio},<br \/>\r\nurl = {https:\/\/arxiv.org\/abs\/2010.09119, Arxiv},<br \/>\r\ndoi = {10.1016\/j.neucom.2021.10.082},<br \/>\r\nissn = {0925-2312},<br \/>\r\nyear  = {2021},<br \/>\r\ndate = {2021-01-01},<br \/>\r\nurldate = {2021-01-01},<br \/>\r\njournal = {Neurocomputing},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('189','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_189\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-arxiv\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/arxiv.org\/abs\/2010.09119\" title=\"Arxiv\" target=\"_blank\">Arxiv<\/a><\/li><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1016\/j.neucom.2021.10.082\" title=\"Follow DOI:10.1016\/j.neucom.2021.10.082\" target=\"_blank\">doi:10.1016\/j.neucom.2021.10.082<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('189','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"FADER: Fast Adversarial Example 
Rejection\" src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/neurocomputing.png\" width=\"80\" alt=\"FADER: Fast Adversarial Example Rejection\" \/><\/div><\/div><h3 class=\"tp_h3\" id=\"tp_h3_2020\">2020<\/h3><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_number\">33.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Bacciu, Davide;  Errica, Federico;  Micheli, Alessio;  Podda, Marco<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('148','tp_links')\" style=\"cursor:pointer;\">A Gentle Introduction to Deep Learning for Graphs<\/a> <span class=\"tp_pub_type article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Neural Networks, <\/span><span class=\"tp_pub_additional_volume\">vol. 129, <\/span><span class=\"tp_pub_additional_pages\">pp. 203-221, <\/span><span class=\"tp_pub_additional_year\">2020<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_148\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('148','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_148\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('148','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_148\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('148','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_148\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{gentleGraphs2020,<br \/>\r\ntitle = {A Gentle Introduction to Deep Learning for Graphs},<br \/>\r\nauthor = {Davide Bacciu and Federico Errica and Alessio Micheli and Marco Podda},<br 
\/>\r\nurl = {https:\/\/arxiv.org\/abs\/1912.12693, Arxiv<br \/>\r\nhttps:\/\/doi.org\/10.1016\/j.neunet.2020.06.006, Original Paper},<br \/>\r\ndoi = {10.1016\/j.neunet.2020.06.006},<br \/>\r\nyear  = {2020},<br \/>\r\ndate = {2020-09-01},<br \/>\r\nurldate = {2020-09-01},<br \/>\r\njournal = {Neural Networks},<br \/>\r\nvolume = {129},<br \/>\r\npages = {203-221},<br \/>\r\npublisher = {Elsevier},<br \/>\r\nabstract = {The adaptive processing of graph data is a long-standing research topic which has been lately consolidated as a theme of major interest in the deep learning community. The snap increase in the amount and breadth of related research has come at the price of little systematization of knowledge and attention to earlier literature. This work is designed as a tutorial introduction to the field of deep learning for graphs. It favours a consistent and progressive introduction of the main concepts and architectural aspects over an exposition of the most recent literature, for which the reader is referred to available surveys. The paper takes a top-down view to the problem, introducing a generalized formulation of graph representation learning based on a local and iterative approach to structured information processing. It introduces the basic building blocks that can be combined to design novel and effective neural models for graphs. The methodological exposition is complemented by a discussion of interesting research challenges and applications in the field. 
},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('148','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_148\" style=\"display:none;\"><div class=\"tp_abstract_entry\">The adaptive processing of graph data is a long-standing research topic which has been lately consolidated as a theme of major interest in the deep learning community. The snap increase in the amount and breadth of related research has come at the price of little systematization of knowledge and attention to earlier literature. This work is designed as a tutorial introduction to the field of deep learning for graphs. It favours a consistent and progressive introduction of the main concepts and architectural aspects over an exposition of the most recent literature, for which the reader is referred to available surveys. The paper takes a top-down view to the problem, introducing a generalized formulation of graph representation learning based on a local and iterative approach to structured information processing. It introduces the basic building blocks that can be combined to design novel and effective neural models for graphs. The methodological exposition is complemented by a discussion of interesting research challenges and applications in the field. 
<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('148','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_148\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-arxiv\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/arxiv.org\/abs\/1912.12693\" title=\"Arxiv\" target=\"_blank\">Arxiv<\/a><\/li><li><i class=\"fas fa-globe\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/doi.org\/10.1016\/j.neunet.2020.06.006\" title=\"Original Paper\" target=\"_blank\">Original Paper<\/a><\/li><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1016\/j.neunet.2020.06.006\" title=\"Follow DOI:10.1016\/j.neunet.2020.06.006\" target=\"_blank\">doi:10.1016\/j.neunet.2020.06.006<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('148','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"A Gentle Introduction to Deep Learning for Graphs\" src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2020\/06\/08936080.jpg\" width=\"80\" alt=\"A Gentle Introduction to Deep Learning for Graphs\" \/><\/div><\/div><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_number\">34.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Bacciu, Davide;  Errica, Federico;  Micheli, Alessio<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('153','tp_links')\" style=\"cursor:pointer;\">Probabilistic Learning on Graphs via Contextual Architectures<\/a> <span class=\"tp_pub_type article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Journal of Machine Learning Research, <\/span><span class=\"tp_pub_additional_volume\">vol. 
21, <\/span><span class=\"tp_pub_additional_number\">no. 134, <\/span><span class=\"tp_pub_additional_pages\">pp. 1\u221239, <\/span><span class=\"tp_pub_additional_year\">2020<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_153\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('153','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_153\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('153','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_153\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('153','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_153\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{jmlrCGMM20,<br \/>\r\ntitle = {Probabilistic Learning on Graphs via Contextual Architectures},<br \/>\r\nauthor = {Davide Bacciu and Federico Errica and Alessio Micheli},<br \/>\r\neditor = {Pushmeet Kohli},<br \/>\r\nurl = {http:\/\/jmlr.org\/papers\/v21\/19-470.html, Paper},<br \/>\r\nyear  = {2020},<br \/>\r\ndate = {2020-07-27},<br \/>\r\nurldate = {2020-07-27},<br \/>\r\njournal = {Journal of Machine Learning Research},<br \/>\r\nvolume = {21},<br \/>\r\nnumber = {134},<br \/>\r\npages = {1\u221239},<br \/>\r\nabstract = {We propose a novel methodology for representation learning on graph-structured data, in which a stack of Bayesian Networks learns different distributions of a vertex's neighborhood. Through an incremental construction policy and layer-wise training, we can build deeper architectures with respect to typical graph convolutional neural networks, with benefits in terms of context spreading between vertices. 
<br \/>\r\nFirst, the model learns from graphs via maximum likelihood estimation without using target labels.<br \/>\r\nThen, a supervised readout is applied to the learned graph embeddings to deal with graph classification and vertex classification tasks, showing competitive results against neural models for graphs. The computational complexity is linear in the number of edges, facilitating learning on large scale data sets. By studying how depth affects the performances of our model, we discover that a broader context generally improves performances. In turn, this leads to a critical analysis of some benchmarks used in literature.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('153','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_153\" style=\"display:none;\"><div class=\"tp_abstract_entry\">We propose a novel methodology for representation learning on graph-structured data, in which a stack of Bayesian Networks learns different distributions of a vertex's neighborhood. Through an incremental construction policy and layer-wise training, we can build deeper architectures with respect to typical graph convolutional neural networks, with benefits in terms of context spreading between vertices. <br \/>\r\nFirst, the model learns from graphs via maximum likelihood estimation without using target labels.<br \/>\r\nThen, a supervised readout is applied to the learned graph embeddings to deal with graph classification and vertex classification tasks, showing competitive results against neural models for graphs. The computational complexity is linear in the number of edges, facilitating learning on large scale data sets. By studying how depth affects the performances of our model, we discover that a broader context generally improves performances. 
In turn, this leads to a critical analysis of some benchmarks used in literature.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('153','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_153\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-globe\"><\/i><a class=\"tp_pub_list\" href=\"http:\/\/jmlr.org\/papers\/v21\/19-470.html\" title=\"Paper\" target=\"_blank\">Paper<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('153','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Probabilistic Learning on Graphs via Contextual Architectures\" src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2020\/07\/jmlr.jpg\" width=\"80\" alt=\"Probabilistic Learning on Graphs via Contextual Architectures\" \/><\/div><\/div><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_number\">35.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Ferrari, Elisa;  Retico, Alessandra;  Bacciu, Davide<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('141','tp_links')\" style=\"cursor:pointer;\">Measuring the effects of confounders in medical supervised classification problems: the Confounding Index (CI)<\/a> <span class=\"tp_pub_type article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Artificial Intelligence in Medicine, <\/span><span class=\"tp_pub_additional_volume\">vol. 
103, <\/span><span class=\"tp_pub_additional_year\">2020<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_141\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('141','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_141\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('141','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_141\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('141','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_141\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{aime20Confound,<br \/>\r\ntitle = {Measuring the effects of confounders in medical supervised classification problems: the Confounding Index (CI)},<br \/>\r\nauthor = {Elisa Ferrari and Alessandra Retico and Davide Bacciu},<br \/>\r\nurl = {https:\/\/arxiv.org\/abs\/1905.08871},<br \/>\r\ndoi = {10.1016\/j.artmed.2020.101804},<br \/>\r\nyear  = {2020},<br \/>\r\ndate = {2020-03-01},<br \/>\r\njournal = {Artificial Intelligence in Medicine},<br \/>\r\nvolume = {103},<br \/>\r\nabstract = {Over the years, there has been growing interest in using Machine Learning techniques for biomedical data processing. When tackling these tasks, one needs to bear in mind that biomedical data depends on a variety of characteristics, such as demographic aspects (age, gender, etc) or the acquisition technology, which might be unrelated with the target of the analysis. In supervised tasks, failing to match the ground truth targets with respect to such characteristics, called confounders, may lead to very misleading estimates of the predictive performance. 
Many strategies have been proposed to handle confounders, ranging from data selection, to normalization techniques, up to the use of training algorithm for learning with imbalanced data. However, all these solutions require the confounders to be known a priori. To this aim, we introduce a novel index that is able to measure the confounding effect of a data attribute in a bias-agnostic way. This index can be used to quantitatively compare the confounding effects of different variables and to inform correction methods such as normalization procedures or ad-hoc-prepared learning algorithms. The effectiveness of this index is validated on both simulated data and real-world neuroimaging data. },<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('141','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_141\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Over the years, there has been growing interest in using Machine Learning techniques for biomedical data processing. When tackling these tasks, one needs to bear in mind that biomedical data depends on a variety of characteristics, such as demographic aspects (age, gender, etc) or the acquisition technology, which might be unrelated with the target of the analysis. In supervised tasks, failing to match the ground truth targets with respect to such characteristics, called confounders, may lead to very misleading estimates of the predictive performance. Many strategies have been proposed to handle confounders, ranging from data selection, to normalization techniques, up to the use of training algorithm for learning with imbalanced data. However, all these solutions require the confounders to be known a priori. 
To this aim, we introduce a novel index that is able to measure the confounding effect of a data attribute in a bias-agnostic way. This index can be used to quantitatively compare the confounding effects of different variables and to inform correction methods such as normalization procedures or ad-hoc-prepared learning algorithms. The effectiveness of this index is validated on both simulated data and real-world neuroimaging data. <\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('141','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_141\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-arxiv\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/arxiv.org\/abs\/1905.08871\" title=\"https:\/\/arxiv.org\/abs\/1905.08871\" target=\"_blank\">https:\/\/arxiv.org\/abs\/1905.08871<\/a><\/li><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1016\/j.artmed.2020.101804\" title=\"Follow DOI:10.1016\/j.artmed.2020.101804\" target=\"_blank\">doi:10.1016\/j.artmed.2020.101804<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('141','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Measuring the effects of confounders in medical supervised classification problems: the Confounding Index (CI)\" src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2020\/01\/aime.jpg\" width=\"80\" alt=\"Measuring the effects of confounders in medical supervised classification problems: the Confounding Index (CI)\" \/><\/div><\/div><h3 class=\"tp_h3\" id=\"tp_h3_2019\">2019<\/h3><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_number\">36.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Bacciu, Davide;  Micheli, Alessio;  Podda, Marco<\/p><p class=\"tp_pub_title\"><a 
class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('137','tp_links')\" style=\"cursor:pointer;\">Edge-based sequential graph generation with recurrent neural networks<\/a> <span class=\"tp_pub_type article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Neurocomputing, <\/span><span class=\"tp_pub_additional_year\">2019<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_137\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('137','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_137\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('137','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_137\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('137','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_137\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{neucompEsann19,<br \/>\r\ntitle = {Edge-based sequential graph generation with recurrent neural networks},<br \/>\r\nauthor = {Davide Bacciu and Alessio Micheli and Marco Podda},<br \/>\r\nurl = {https:\/\/arxiv.org\/abs\/2002.00102v1},<br \/>\r\nyear  = {2019},<br \/>\r\ndate = {2019-12-31},<br \/>\r\njournal = {Neurocomputing},<br \/>\r\nabstract = {     Graph generation with Machine Learning is an open problem with applications in various research fields. In this work, we propose to cast the generative process of a graph into a sequential one, relying on a node ordering procedure. 
We use this sequential process to design a novel generative model composed of two recurrent neural networks that learn to predict the edges of graphs: the first network generates one endpoint of each edge, while the second network generates the other endpoint conditioned on the state of the first. We test our approach extensively on five different datasets, comparing with two well-known baselines coming from graph literature, and two recurrent approaches, one of which holds state of the art performances. Evaluation is conducted considering quantitative and qualitative characteristics of the generated samples. Results show that our approach is able to yield novel, and unique graphs originating from very different distributions, while retaining structural properties very similar to those in the training sample. Under the proposed evaluation framework, our approach is able to reach performances comparable to the current state of the art on the graph generation task. },<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('137','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_137\" style=\"display:none;\"><div class=\"tp_abstract_entry\">     Graph generation with Machine Learning is an open problem with applications in various research fields. In this work, we propose to cast the generative process of a graph into a sequential one, relying on a node ordering procedure. We use this sequential process to design a novel generative model composed of two recurrent neural networks that learn to predict the edges of graphs: the first network generates one endpoint of each edge, while the second network generates the other endpoint conditioned on the state of the first. 
We test our approach extensively on five different datasets, comparing with two well-known baselines coming from graph literature, and two recurrent approaches, one of which holds state of the art performances. Evaluation is conducted considering quantitative and qualitative characteristics of the generated samples. Results show that our approach is able to yield novel, and unique graphs originating from very different distributions, while retaining structural properties very similar to those in the training sample. Under the proposed evaluation framework, our approach is able to reach performances comparable to the current state of the art on the graph generation task. <\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('137','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_137\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-arxiv\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/arxiv.org\/abs\/2002.00102v1\" title=\"https:\/\/arxiv.org\/abs\/2002.00102v1\" target=\"_blank\">https:\/\/arxiv.org\/abs\/2002.00102v1<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('137','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Edge-based sequential graph generation with recurrent neural networks\" src=\"https:\/\/secure-ecsd.elsevier.com\/covers\/80\/Tango2\/large\/09252312.jpg\" width=\"80\" alt=\"Edge-based sequential graph generation with recurrent neural networks\" \/><\/div><\/div><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_number\">37.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Davide, Bacciu;  Maurizio, Di Rocco;  Mauro, Dragone;  Claudio, Gallicchio;  Alessio, Micheli;  Alessandro, Saffiotti<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" 
onclick=\"teachpress_pub_showhide('132','tp_links')\" style=\"cursor:pointer;\">An Ambient Intelligence Approach for Learning in Smart Robotic Environments<\/a> <span class=\"tp_pub_type article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Computational Intelligence, <\/span><span class=\"tp_pub_additional_year\">2019<\/span><span class=\"tp_pub_additional_note\">, (Early View (Online Version of Record before inclusion in an issue)\r\n)<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_132\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('132','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_132\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('132','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_132\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('132','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_132\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{rubicon2019CI,<br \/>\r\ntitle = {An Ambient Intelligence Approach for Learning in Smart Robotic Environments},<br \/>\r\nauthor = {Bacciu Davide and Di Rocco Maurizio and Dragone Mauro and Gallicchio Claudio and Micheli Alessio and Saffiotti Alessandro},<br \/>\r\ndoi = {10.1111\/coin.12233},<br \/>\r\nyear  = {2019},<br \/>\r\ndate = {2019-07-31},<br \/>\r\njournal = {Computational Intelligence},<br \/>\r\nabstract = {Smart robotic environments combine traditional (ambient) sensing devices and mobile robots. 
This combination extends the type of applications that can be considered, reduces their complexity, and enhances the individual values of the devices involved by enabling new services that cannot be performed by a single device. In order to reduce the amount of preparation and pre-programming required for their deployment in real world applications,  it is important to make these systems self-learning, self-configuring, and self-adapting. The solution presented in this paper is based upon a type of compositional adaptation where (possibly multiple) plans of actions are created through planning and involve the activation of pre-existing capabilities. All the devices in the smart environment  participate in a pervasive learning infrastructure, which is exploited to recognize which plans of actions are most suited to the current situation. The system is evaluated in experiments run in a real domestic environment, showing its ability to pro-actively and smoothly adapt to subtle changes in the environment and in the habits and preferences<br \/>\r\nof their user(s).},<br \/>\r\nnote = {Early View (Online Version of Record before inclusion in an issue)<br \/>\r\n},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('132','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_132\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Smart robotic environments combine traditional (ambient) sensing devices and mobile robots. This combination extends the type of applications that can be considered, reduces their complexity, and enhances the individual values of the devices involved by enabling new services that cannot be performed by a single device. 
In order to reduce the amount of preparation and pre-programming required for their deployment in real-world applications, it is important to make these systems self-learning, self-configuring, and self-adapting. The solution presented in this paper is based upon a type of compositional adaptation where (possibly multiple) plans of actions are created through planning and involve the activation of pre-existing capabilities. All the devices in the smart environment participate in a pervasive learning infrastructure, which is exploited to recognize which plans of actions are most suited to the current situation. The system is evaluated in experiments run in a real domestic environment, showing its ability to pro-actively and smoothly adapt to subtle changes in the environment and in the habits and preferences of their user(s).<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('132','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_132\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1111\/coin.12233\" title=\"Follow DOI:10.1111\/coin.12233\" target=\"_blank\">doi:10.1111\/coin.12233<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('132','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><\/div><\/div><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_number\">38.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Davide, Bacciu;  Daniele, Castellana<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('123','tp_links')\" style=\"cursor:pointer;\">Bayesian Mixtures of Hidden Tree Markov Models for Structured Data Clustering<\/a> <span class=\"tp_pub_type article\">Journal Article<\/span> <\/p><p 
class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Neurocomputing, <\/span><span class=\"tp_pub_additional_volume\">vol. 342, <\/span><span class=\"tp_pub_additional_pages\">pp. 49-59, <\/span><span class=\"tp_pub_additional_year\">2019<\/span>, <span class=\"tp_pub_additional_isbn\">ISSN: 0925-2312<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_123\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('123','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_123\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('123','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_123\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('123','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_123\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{neucomBayesHTMM,<br \/>\r\ntitle = {Bayesian Mixtures of Hidden Tree Markov Models for Structured Data Clustering},<br \/>\r\nauthor = {Bacciu Davide and Castellana Daniele},<br \/>\r\nurl = {https:\/\/doi.org\/10.1016\/j.neucom.2018.11.091},<br \/>\r\ndoi = {10.1016\/j.neucom.2018.11.091},<br \/>\r\nissn = {0925-2312},<br \/>\r\nyear  = {2019},<br \/>\r\ndate = {2019-05-21},<br \/>\r\njournal = {Neurocomputing},<br \/>\r\nvolume = {342},<br \/>\r\npages = {49-59},<br \/>\r\nabstract = {The paper deals with the problem of unsupervised learning with structured data, proposing a mixture model approach to cluster tree samples. 
First, we discuss how to use the Switching-Parent Hidden Tree Markov Model, a compositional model for learning tree distributions, to define a finite mixture model where the number of components is fixed by a hyperparameter. Then, we show how to relax such an assumption by introducing a Bayesian non-parametric mixture model where the number of necessary hidden tree components is learned from data. Experimental validation on synthetic and real datasets show the benefit of mixture models over simple hidden tree models in clustering applications. Further, we provide a characterization of the behaviour of the two mixture models for different choices of their hyperparameters.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('123','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_123\" style=\"display:none;\"><div class=\"tp_abstract_entry\">The paper deals with the problem of unsupervised learning with structured data, proposing a mixture model approach to cluster tree samples. First, we discuss how to use the Switching-Parent Hidden Tree Markov Model, a compositional model for learning tree distributions, to define a finite mixture model where the number of components is fixed by a hyperparameter. Then, we show how to relax such an assumption by introducing a Bayesian non-parametric mixture model where the number of necessary hidden tree components is learned from data. Experimental validation on synthetic and real datasets show the benefit of mixture models over simple hidden tree models in clustering applications. 
Further, we provide a characterization of the behaviour of the two mixture models for different choices of their hyperparameters.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('123','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_123\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-globe\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/doi.org\/10.1016\/j.neucom.2018.11.091\" title=\"https:\/\/doi.org\/10.1016\/j.neucom.2018.11.091\" target=\"_blank\">https:\/\/doi.org\/10.1016\/j.neucom.2018.11.091<\/a><\/li><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1016\/j.neucom.2018.11.091\" title=\"Follow DOI:10.1016\/j.neucom.2018.11.091\" target=\"_blank\">doi:10.1016\/j.neucom.2018.11.091<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('123','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Bayesian Mixtures of Hidden Tree Markov Models for Structured Data Clustering\" src=\"https:\/\/secure-ecsd.elsevier.com\/covers\/80\/Tango2\/large\/09252312.jpg\" width=\"80\" alt=\"Bayesian Mixtures of Hidden Tree Markov Models for Structured Data Clustering\" \/><\/div><\/div><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_number\">39.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Bacciu, Davide;  Crecchi, Francesco<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('130','tp_links')\" style=\"cursor:pointer;\">Augmenting Recurrent Neural Networks Resilience by Dropout<\/a> <span class=\"tp_pub_type article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">IEEE Transactions on Neural Networks and Learning Systems, 
<\/span><span class=\"tp_pub_additional_year\">2019<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_130\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('130','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_130\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('130','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_130\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('130','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_130\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{tnnnls_dropin2019,<br \/>\r\ntitle = {Augmenting Recurrent Neural Networks Resilience by Dropout},<br \/>\r\nauthor = {Davide Bacciu and Francesco Crecchi},<br \/>\r\ndoi = {10.1109\/TNNLS.2019.2899744},<br \/>\r\nyear  = {2019},<br \/>\r\ndate = {2019-03-31},<br \/>\r\nurldate = {2019-03-31},<br \/>\r\njournal = {IEEE Transactions on Neural Networks and Learning Systems},<br \/>\r\nabstract = {The paper discusses the simple idea that dropout regularization can be used to efficiently induce resiliency to missing inputs at prediction time in a generic neural network.  We show how the approach can be effective on tasks where imputation strategies often fail, namely involving recurrent neural networks and scenarios where whole sequences of input observations are missing. 
The experimental analysis provides an assessment of the accuracy-resiliency tradeoff in multiple recurrent models, including reservoir computing methods, and comprising real-world ambient intelligence and biomedical time series.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('130','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_130\" style=\"display:none;\"><div class=\"tp_abstract_entry\">The paper discusses the simple idea that dropout regularization can be used to efficiently induce resiliency to missing inputs at prediction time in a generic neural network.  We show how the approach can be effective on tasks where imputation strategies often fail, namely involving recurrent neural networks and scenarios where whole sequences of input observations are missing. The experimental analysis provides an assessment of the accuracy-resiliency tradeoff in multiple recurrent models, including reservoir computing methods, and comprising real-world ambient intelligence and biomedical time series.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('130','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_130\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1109\/TNNLS.2019.2899744\" title=\"Follow DOI:10.1109\/TNNLS.2019.2899744\" target=\"_blank\">doi:10.1109\/TNNLS.2019.2899744<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('130','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Augmenting Recurrent Neural Networks Resilience by Dropout\" 
src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/tnnls.jpg\" width=\"80\" alt=\"Augmenting Recurrent Neural Networks Resilience by Dropout\" \/><\/div><\/div><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_number\">40.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Cosimo, Della Santina;  Visar, Arapi;  Giuseppe, Averta;  Francesca, Damiani;  Gaia, Fiore;  Alessandro, Settimi;  Giuseppe, Catalano Manuel;  Davide, Bacciu;  Antonio, Bicchi;  Matteo, Bianchi<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('126','tp_links')\" style=\"cursor:pointer;\">Learning from humans how to grasp: a data-driven architecture for autonomous grasping with anthropomorphic soft hands<\/a> <span class=\"tp_pub_type article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">IEEE Robotics and Automation Letters, <\/span><span class=\"tp_pub_additional_pages\">pp. 
1-8, <\/span><span class=\"tp_pub_additional_year\">2019<\/span>, <span class=\"tp_pub_additional_issn\">ISSN: 2377-3766<\/span><span class=\"tp_pub_additional_note\">, (Also accepted for presentation at ICRA 2019)<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_resource_link\"><a id=\"tp_links_sh_126\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('126','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_126\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('126','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_126\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{ral2019,<br \/>\r\ntitle = {Learning from humans how to grasp: a data-driven architecture for autonomous grasping with anthropomorphic soft hands},<br \/>\r\nauthor = {Della Santina Cosimo and Arapi Visar and Averta Giuseppe and Damiani Francesca and Fiore Gaia and Settimi Alessandro and Catalano Manuel Giuseppe and Bacciu Davide and Bicchi Antonio and Bianchi Matteo},<br \/>\r\nurl = {https:\/\/ieeexplore.ieee.org\/document\/8629968},<br \/>\r\ndoi = {10.1109\/LRA.2019.2896485},<br \/>\r\nissn = {2377-3766},<br \/>\r\nyear  = {2019},<br \/>\r\ndate = {2019-02-01},<br \/>\r\njournal = {IEEE Robotics and Automation Letters},<br \/>\r\npages = {1-8},<br \/>\r\nnote = {Also accepted for presentation at ICRA 2019},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('126','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_126\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-globe\"><\/i><a class=\"tp_pub_list\" 
href=\"https:\/\/ieeexplore.ieee.org\/document\/8629968\" title=\"https:\/\/ieeexplore.ieee.org\/document\/8629968\" target=\"_blank\">https:\/\/ieeexplore.ieee.org\/document\/8629968<\/a><\/li><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1109\/LRA.2019.2896485\" title=\"Follow DOI:10.1109\/LRA.2019.2896485\" target=\"_blank\">doi:10.1109\/LRA.2019.2896485<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('126','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Learning from humans how to grasp: a data-driven architecture for autonomous grasping with anthropomorphic soft hands\" src=\"https:\/\/team.inria.fr\/rainbow\/files\/2019\/05\/RAL2018.png\" width=\"80\" alt=\"Learning from humans how to grasp: a data-driven architecture for autonomous grasping with anthropomorphic soft hands\" \/><\/div><\/div><h3 class=\"tp_h3\" id=\"tp_h3_2018\">2018<\/h3><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_number\">41.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Arapi, Visar;  Santina, Cosimo Della;  Bacciu, Davide;  Bianchi, Matteo;  Bicchi, Antonio<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('124','tp_links')\" style=\"cursor:pointer;\">DeepDynamicHand: A deep neural architecture for labeling hand manipulation strategies in video sources exploiting temporal information <\/a> <span class=\"tp_pub_type article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Frontiers in Neurorobotics, <\/span><span class=\"tp_pub_additional_volume\">vol. 12, <\/span><span class=\"tp_pub_additional_pages\">pp. 
86, <\/span><span class=\"tp_pub_additional_year\">2018<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_124\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('124','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_124\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('124','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_124\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('124','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_124\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{frontNeurob18,<br \/>\r\ntitle = {DeepDynamicHand: A deep neural architecture for labeling hand manipulation strategies in video sources exploiting temporal information },<br \/>\r\nauthor = {Visar Arapi and Cosimo Della Santina and Davide Bacciu and Matteo Bianchi and Antonio Bicchi},<br \/>\r\nurl = {https:\/\/www.frontiersin.org\/articles\/10.3389\/fnbot.2018.00086\/full},<br \/>\r\ndoi = {10.3389\/fnbot.2018.00086},<br \/>\r\nyear  = {2018},<br \/>\r\ndate = {2018-12-17},<br \/>\r\nurldate = {2018-12-17},<br \/>\r\njournal = {Frontiers in Neurorobotics},<br \/>\r\nvolume = {12},<br \/>\r\npages = {86},<br \/>\r\nabstract = {Humans are capable of complex manipulation interactions with the environment, relying on the intrinsic adaptability and compliance of their hands. Recently, soft robotic manipulation has attempted to reproduce such an extraordinary behavior, through the design of deformable yet robust end-effectors. To this goal, the investigation of human behavior has become crucial to correctly inform technological developments of robotic hands that can successfully exploit environmental constraint as humans actually do. 
Among the different tools robotics can leverage on to achieve this objective, deep learning has emerged as a promising approach for the study and then the implementation of neuro-scientific observations on the artificial side. However, current approaches tend to neglect the dynamic nature of hand pose recognition problems, limiting the effectiveness of these techniques in identifying sequences of manipulation primitives underpinning action generation, e.g. during purposeful interaction with the environment. In this work, we propose a vision-based supervised Hand Pose Recognition method which, for the first time, takes into account temporal information to identify meaningful sequences of actions in grasping and manipulation tasks. More specifically, we apply Deep Neural Networks to automatically learn features from hand posture images that consist of frames extracted from grasping and manipulation task videos with objects and external environmental constraints. For training purposes, videos are divided into intervals, each associated with a specific action by a human supervisor. The proposed algorithm combines a Convolutional Neural Network to detect the hand within each video frame and a Recurrent Neural Network to predict the hand action in the current frame, while taking into consideration the history of actions performed in the previous frames. Experimental validation has been performed on two datasets of dynamic hand-centric strategies, where subjects regularly interact with objects and environment. The proposed architecture achieved a very good classification accuracy on both datasets, reaching performance up to 94%, and outperforming state-of-the-art techniques. The outcomes of this study can be successfully applied to robotics, e.g. for planning and control of soft anthropomorphic manipulators. 
},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('124','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_124\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Humans are capable of complex manipulation interactions with the environment, relying on the intrinsic adaptability and compliance of their hands. Recently, soft robotic manipulation has attempted to reproduce such an extraordinary behavior, through the design of deformable yet robust end-effectors. To this goal, the investigation of human behavior has become crucial to correctly inform technological developments of robotic hands that can successfully exploit environmental constraint as humans actually do. Among the different tools robotics can leverage on to achieve this objective, deep learning has emerged as a promising approach for the study and then the implementation of neuro-scientific observations on the artificial side. However, current approaches tend to neglect the dynamic nature of hand pose recognition problems, limiting the effectiveness of these techniques in identifying sequences of manipulation primitives underpinning action generation, e.g. during purposeful interaction with the environment. In this work, we propose a vision-based supervised Hand Pose Recognition method which, for the first time, takes into account temporal information to identify meaningful sequences of actions in grasping and manipulation tasks. More specifically, we apply Deep Neural Networks to automatically learn features from hand posture images that consist of frames extracted from grasping and manipulation task videos with objects and external environmental constraints. For training purposes, videos are divided into intervals, each associated with a specific action by a human supervisor. 
The proposed algorithm combines a Convolutional Neural Network to detect the hand within each video frame and a Recurrent Neural Network to predict the hand action in the current frame, while taking into consideration the history of actions performed in the previous frames. Experimental validation has been performed on two datasets of dynamic hand-centric strategies, where subjects regularly interact with objects and environment. The proposed architecture achieved a very good classification accuracy on both datasets, reaching performance up to 94%, and outperforming state-of-the-art techniques. The outcomes of this study can be successfully applied to robotics, e.g. for planning and control of soft anthropomorphic manipulators. <\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('124','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_124\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-globe\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/www.frontiersin.org\/articles\/10.3389\/fnbot.2018.00086\/full\" title=\"https:\/\/www.frontiersin.org\/articles\/10.3389\/fnbot.2018.00086\/full\" target=\"_blank\">https:\/\/www.frontiersin.org\/articles\/10.3389\/fnbot.2018.00086\/full<\/a><\/li><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.3389\/fnbot.2018.00086\" title=\"Follow DOI:10.3389\/fnbot.2018.00086\" target=\"_blank\">doi:10.3389\/fnbot.2018.00086<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('124','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"DeepDynamicHand: A deep neural architecture for labeling hand manipulation strategies in video sources exploiting temporal information \" src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/frobotics.jpg\" width=\"80\" 
alt=\"DeepDynamicHand: A deep neural architecture for labeling hand manipulation strategies in video sources exploiting temporal information \" \/><\/div><\/div><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_number\">42.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Marco, Podda;  Davide, Bacciu;  Alessio, Micheli;  Roberto, Bellu;  Giulia, Placidi;  Luigi, Gagliardi<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('120','tp_links')\" style=\"cursor:pointer;\">A machine learning approach to estimating preterm infants survival: development of the Preterm Infants Survival Assessment (PISA) predictor<\/a> <span class=\"tp_pub_type article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Nature Scientific Reports, <\/span><span class=\"tp_pub_additional_volume\">vol. 8, <\/span><span class=\"tp_pub_additional_year\">2018<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_120\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('120','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_120\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('120','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_120\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('120','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_120\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{naturescirep2018,<br \/>\r\ntitle = {A machine learning approach to estimating preterm infants survival: development of the Preterm Infants Survival Assessment (PISA) predictor},<br \/>\r\nauthor = 
{Podda Marco and Bacciu Davide and Micheli Alessio and Bellu Roberto and Placidi Giulia and Gagliardi Luigi },<br \/>\r\nurl = {https:\/\/doi.org\/10.1038\/s41598-018-31920-6},<br \/>\r\ndoi = {10.1038\/s41598-018-31920-6},<br \/>\r\nyear  = {2018},<br \/>\r\ndate = {2018-09-13},<br \/>\r\nurldate = {2018-09-13},<br \/>\r\njournal = {Nature Scientific Reports},<br \/>\r\nvolume = {8},<br \/>\r\nabstract = {Estimation of mortality risk of very preterm neonates is carried out in clinical and research settings. We aimed at elaborating a prediction tool using machine learning methods. We developed models on a cohort of 23747 neonates &lt;30 weeks gestational age, or &lt;1501 g birth weight, enrolled in the Italian Neonatal Network in 2008\u20132014 (development set), using 12 easily collected perinatal variables. We used a cohort from 2015\u20132016 (N\u2009=\u20095810) as a test set. Among several machine learning methods we chose artificial Neural Networks (NN). The resulting predictor was compared with logistic regression models. In the test cohort, NN had a slightly better discrimination than logistic regression (P\u2009&lt;\u20090.002). The differences were greater in subgroups of neonates (at various gestational age or birth weight intervals, singletons). Using a cutoff of death probability of 0.5, logistic regression misclassified 67\/5810 neonates (1.2 percent) more than NN. In conclusion our study \u2013 the largest published so far \u2013 shows that even in this very simplified scenario, using only limited information available up to 5 minutes after birth, a NN approach had a small but significant advantage over current approaches. 
The software implementing the predictor is made freely available to the community.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('120','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_120\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Estimation of mortality risk of very preterm neonates is carried out in clinical and research settings. We aimed at elaborating a prediction tool using machine learning methods. We developed models on a cohort of 23747 neonates &lt;30 weeks gestational age, or &lt;1501 g birth weight, enrolled in the Italian Neonatal Network in 2008\u20132014 (development set), using 12 easily collected perinatal variables. We used a cohort from 2015\u20132016 (N\u2009=\u20095810) as a test set. Among several machine learning methods we chose artificial Neural Networks (NN). The resulting predictor was compared with logistic regression models. In the test cohort, NN had a slightly better discrimination than logistic regression (P\u2009&lt;\u20090.002). The differences were greater in subgroups of neonates (at various gestational age or birth weight intervals, singletons). Using a cutoff of death probability of 0.5, logistic regression misclassified 67\/5810 neonates (1.2 percent) more than NN. In conclusion our study \u2013 the largest published so far \u2013 shows that even in this very simplified scenario, using only limited information available up to 5 minutes after birth, a NN approach had a small but significant advantage over current approaches. 
The software implementing the predictor is made freely available to the community.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('120','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_120\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-globe\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/doi.org\/10.1038\/s41598-018-31920-6\" title=\"https:\/\/doi.org\/10.1038\/s41598-018-31920-6\" target=\"_blank\">https:\/\/doi.org\/10.1038\/s41598-018-31920-6<\/a><\/li><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1038\/s41598-018-31920-6\" title=\"Follow DOI:10.1038\/s41598-018-31920-6\" target=\"_blank\">doi:10.1038\/s41598-018-31920-6<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('120','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"A machine learning approach to estimating preterm infants survival: development of the Preterm Infants Survival Assessment (PISA) predictor\" src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/scirep.png\" width=\"80\" alt=\"A machine learning approach to estimating preterm infants survival: development of the Preterm Infants Survival Assessment (PISA) predictor\" \/><\/div><\/div><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_number\">43.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Davide, Bacciu;  Michele, Colombo;  Davide, Morelli;  David, Plans<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('113','tp_links')\" style=\"cursor:pointer;\">Randomized neural networks for preference learning with physiological data<\/a> <span class=\"tp_pub_type article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span 
class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Neurocomputing, <\/span><span class=\"tp_pub_additional_volume\">vol. 298, <\/span><span class=\"tp_pub_additional_pages\">pp. 9-20, <\/span><span class=\"tp_pub_additional_year\">2018<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_113\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('113','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_113\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('113','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_113\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('113','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_113\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{neurocomp2017,<br \/>\r\ntitle = {Randomized neural networks for preference learning with physiological data},<br \/>\r\nauthor = {Bacciu Davide and Colombo Michele and Morelli Davide and Plans David},<br \/>\r\neditor = {Fabio Aiolli and Luca Oneto and Michael Biehl },<br \/>\r\nurl = {https:\/\/authors.elsevier.com\/a\/1Wxbz_L2Otpsb3},<br \/>\r\ndoi = {10.1016\/j.neucom.2017.11.070},<br \/>\r\nyear  = {2018},<br \/>\r\ndate = {2018-07-12},<br \/>\r\njournal = {Neurocomputing},<br \/>\r\nvolume = {298},<br \/>\r\npages = {9-20},<br \/>\r\nabstract = {The paper discusses the use of randomized neural networks to learn a complete ordering between samples of heart-rate variability data by relying solely on partial and subject-dependent information concerning pairwise relations between samples. We confront two approaches, i.e. 
Extreme Learning Machines and Echo State Networks, assessing the effectiveness in exploiting hand-engineered heart-rate variability features versus using raw beat-to-beat sequential data. Additionally, we introduce a weight sharing architecture and a preference learning error function whose performance is compared with a standard architecture realizing pairwise ranking as a binary-classification task. The models are evaluated on real-world data from a mobile application realizing a guided breathing exercise, using a dataset of over 54K exercising sessions. Results show how a randomized neural model processing information in its raw sequential form can outperform its vectorial counterpart, increasing accuracy in predicting the correct sample ordering by about 20%.  Further, the experiments highlight the importance of using weight sharing architectures to learn smooth and generalizable complete orders induced by the preference relation.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('113','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_113\" style=\"display:none;\"><div class=\"tp_abstract_entry\">The paper discusses the use of randomized neural networks to learn a complete ordering between samples of heart-rate variability data by relying solely on partial and subject-dependent information concerning pairwise relations between samples. We confront two approaches, i.e. Extreme Learning Machines and Echo State Networks, assessing the effectiveness in exploiting hand-engineered heart-rate variability features versus using raw beat-to-beat sequential data. Additionally, we introduce a weight sharing architecture and a preference learning error function whose performance is compared with a standard architecture realizing pairwise ranking as a binary-classification task. 
The models are evaluated on real-world data from a mobile application realizing a guided breathing exercise, using a dataset of over 54K exercising sessions. Results show how a randomized neural model processing information in its raw sequential form can outperform its vectorial counterpart, increasing accuracy in predicting the correct sample ordering by about 20%.  Further, the experiments highlight the importance of using weight sharing architectures to learn smooth and generalizable complete orders induced by the preference relation.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('113','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_113\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-globe\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/authors.elsevier.com\/a\/1Wxbz_L2Otpsb3\" title=\"https:\/\/authors.elsevier.com\/a\/1Wxbz_L2Otpsb3\" target=\"_blank\">https:\/\/authors.elsevier.com\/a\/1Wxbz_L2Otpsb3<\/a><\/li><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1016\/j.neucom.2017.11.070\" title=\"Follow DOI:10.1016\/j.neucom.2017.11.070\" target=\"_blank\">doi:10.1016\/j.neucom.2017.11.070<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('113','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Randomized neural networks for preference learning with physiological data\" src=\"https:\/\/secure-ecsd.elsevier.com\/covers\/80\/Tango2\/large\/09252312.jpg\" width=\"80\" alt=\"Randomized neural networks for preference learning with physiological data\" \/><\/div><\/div><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_number\">44.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Davide, Bacciu;  Alessio, Micheli;  Alessandro, Sperduti<\/p><p 
class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('114','tp_links')\" style=\"cursor:pointer;\">Generative Kernels for Tree-Structured Data<\/a> <span class=\"tp_pub_type article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Neural Networks and Learning Systems, IEEE Transactions on, <\/span><span class=\"tp_pub_additional_year\">2018<\/span>, <span class=\"tp_pub_additional_issn\">ISSN: 2162-2388 <\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_114\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('114','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_114\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('114','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_114\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('114','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_114\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{tnnlsTreeKer17,<br \/>\r\ntitle = {Generative Kernels for Tree-Structured Data},<br \/>\r\nauthor = {Bacciu Davide and Micheli Alessio and Sperduti Alessandro},<br \/>\r\ndoi = {10.1109\/TNNLS.2017.2785292},<br \/>\r\nissn = {2162-2388 },<br \/>\r\nyear  = {2018},<br \/>\r\ndate = {2018-01-15},<br \/>\r\njournal = {Neural Networks and Learning Systems, IEEE Transactions on},<br \/>\r\nabstract = {The paper presents a family of methods for the design of adaptive kernels for tree-structured data that exploits the summarization properties of hidden states of hidden Markov models for trees. 
We introduce a compact and discriminative feature space based on the concept of hidden states multisets and we discuss different approaches to estimate such hidden state encoding. We show how it can be used to build an efficient and general tree kernel based on Jaccard similarity. Further, we derive an unsupervised convolutional generative kernel using a topology induced on the Markov states by a tree topographic mapping. The paper provides an extensive empirical assessment on a variety of structured data learning tasks, comparing the predictive accuracy and computational efficiency of state-of-the-art generative, adaptive and syntactical tree kernels. The results show that the proposed generative approach has a good tradeoff between computational complexity and predictive performance, in particular when considering the soft matching introduced by the topographic mapping.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('114','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_114\" style=\"display:none;\"><div class=\"tp_abstract_entry\">The paper presents a family of methods for the design of adaptive kernels for tree-structured data that exploits the summarization properties of hidden states of hidden Markov models for trees. We introduce a compact and discriminative feature space based on the concept of hidden states multisets and we discuss different approaches to estimate such hidden state encoding. We show how it can be used to build an efficient and general tree kernel based on Jaccard similarity. Further, we derive an unsupervised convolutional generative kernel using a topology induced on the Markov states by a tree topographic mapping. 
The paper provides an extensive empirical assessment on a variety of structured data learning tasks, comparing the predictive accuracy and computational efficiency of state-of-the-art generative, adaptive and syntactical tree kernels. The results show that the proposed generative approach has a good tradeoff between computational complexity and predictive performance, in particular when considering the soft matching introduced by the topographic mapping.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('114','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_114\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1109\/TNNLS.2017.2785292\" title=\"Follow DOI:10.1109\/TNNLS.2017.2785292\" target=\"_blank\">doi:10.1109\/TNNLS.2017.2785292<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('114','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Generative Kernels for Tree-Structured Data\" src=\"https:\/\/cis.ieee.org\/images\/files\/Publications\/TNNLS\/tnnls.jpg\" width=\"80\" alt=\"Generative Kernels for Tree-Structured Data\" \/><\/div><\/div><h3 class=\"tp_h3\" id=\"tp_h3_2017\">2017<\/h3><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_number\">45.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Davide, Bacciu;  Stefano, Chessa;  Claudio, Gallicchio;  Alessio, Micheli;  Luca, Pedrelli;  Erina, Ferro;  Luigi, Fortunati;  Davide, La Rosa;  Filippo, Palumbo;  Federico, Vozzi;  Oberdan, Parodi<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('112','tp_links')\" style=\"cursor:pointer;\">A Learning System for Automatic Berg Balance Scale Score Estimation<\/a> <span class=\"tp_pub_type 
article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Engineering Applications of Artificial Intelligence journal, <\/span><span class=\"tp_pub_additional_volume\">vol. 66, <\/span><span class=\"tp_pub_additional_pages\">pp. 60-74, <\/span><span class=\"tp_pub_additional_year\">2017<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_112\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('112','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_112\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('112','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_112\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('112','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_112\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{eaai2017,<br \/>\r\ntitle = {A Learning System for Automatic Berg Balance Scale Score Estimation},<br \/>\r\nauthor = {Bacciu Davide and Chessa Stefano and Gallicchio Claudio and Micheli Alessio and Pedrelli Luca and Ferro Erina and Fortunati Luigi and La Rosa Davide and Palumbo Filippo and Vozzi Federico and Parodi Oberdan},<br \/>\r\nurl = {http:\/\/www.sciencedirect.com\/science\/article\/pii\/S0952197617302026},<br \/>\r\ndoi = {https:\/\/doi.org\/10.1016\/j.engappai.2017.08.018},<br \/>\r\nyear  = {2017},<br \/>\r\ndate = {2017-08-24},<br \/>\r\nurldate = {2017-08-24},<br \/>\r\njournal = {Engineering Applications of Artificial Intelligence journal},<br \/>\r\nvolume = {66},<br \/>\r\npages = {60-74},<br \/>\r\nabstract = {The objective of this work is the development of a learning system for the automatic 
assessment of balance abilities in elderly people. The system is based on estimating the Berg Balance Scale (BBS) score from the stream of sensor data gathered by a Wii Balance Board. The scientific challenge tackled by our investigation is to assess the feasibility of exploiting the richness of the temporal signals gathered by the balance board for inferring the complete BBS score based on data from a single BBS exercise.<br \/>\r\n<br \/>\r\nThe relation between the data collected by the balance board and the BBS score is inferred by neural networks for temporal data, modeled in particular as Echo State Networks within the Reservoir Computing (RC) paradigm, as a result of a comprehensive comparison among different learning models. The proposed system is able to estimate the complete BBS score directly from temporal data on exercise #10 of the BBS test, with \u224810 s of duration. Experimental results on real-world data show an absolute error below 4 BBS score points (i.e. below 7% of the whole BBS range), resulting in a favorable trade-off between predictive performance and user\u2019s required time with respect to previous works in the literature. 
Results achieved by RC models also compare well with different related learning models.<br \/>\r\n<br \/>\r\nOverall, the proposed system stands out as an effective tool for an accurate automated assessment of balance abilities in the elderly and it is characterized by being unobtrusive, easy to use and suitable for autonomous usage.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('112','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_112\" style=\"display:none;\"><div class=\"tp_abstract_entry\">The objective of this work is the development of a learning system for the automatic assessment of balance abilities in elderly people. The system is based on estimating the Berg Balance Scale (BBS) score from the stream of sensor data gathered by a Wii Balance Board. The scientific challenge tackled by our investigation is to assess the feasibility of exploiting the richness of the temporal signals gathered by the balance board for inferring the complete BBS score based on data from a single BBS exercise.<br \/>\r\n<br \/>\r\nThe relation between the data collected by the balance board and the BBS score is inferred by neural networks for temporal data, modeled in particular as Echo State Networks within the Reservoir Computing (RC) paradigm, as a result of a comprehensive comparison among different learning models. The proposed system is able to estimate the complete BBS score directly from temporal data on exercise #10 of the BBS test, with \u224810 s of duration. Experimental results on real-world data show an absolute error below 4 BBS score points (i.e. below 7% of the whole BBS range), resulting in a favorable trade-off between predictive performance and user\u2019s required time with respect to previous works in the literature. 
Results achieved by RC models also compare well with respect to other related learning models.<br \/>\r\n<br \/>\r\nOverall, the proposed system stands out as an effective tool for an accurate automated assessment of balance abilities in the elderly, being unobtrusive, easy to use and suitable for autonomous usage.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('112','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_112\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-globe\"><\/i><a class=\"tp_pub_list\" href=\"http:\/\/www.sciencedirect.com\/science\/article\/pii\/S0952197617302026\" title=\"http:\/\/www.sciencedirect.com\/science\/article\/pii\/S0952197617302026\" target=\"_blank\">http:\/\/www.sciencedirect.com\/science\/article\/pii\/S0952197617302026<\/a><\/li><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/https:\/\/doi.org\/10.1016\/j.engappai.2017.08.018\" title=\"Follow DOI:https:\/\/doi.org\/10.1016\/j.engappai.2017.08.018\" target=\"_blank\">doi:https:\/\/doi.org\/10.1016\/j.engappai.2017.08.018<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('112','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"A Learning System for Automatic Berg Balance Scale Score Estimation\" src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/eaai.jpg\" width=\"80\" alt=\"A Learning System for Automatic Berg Balance Scale Score Estimation\" \/><\/div><\/div><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_number\">46.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Filippo, Palumbo;  Davide, La Rosa;  Erina, Ferro;  Davide, Bacciu;  Claudio, Gallicchio;  Alessio, Micheli;  Stefano, Chessa;  Federico, 
Vozzi;  Oberdan, Parodi<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('109','tp_links')\" style=\"cursor:pointer;\">Reliability and human factors in Ambient Assisted Living environments: The DOREMI case study<\/a> <span class=\"tp_pub_type article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Journal of Reliable Intelligent Environments, <\/span><span class=\"tp_pub_additional_volume\">vol. 3, <\/span><span class=\"tp_pub_additional_number\">no. 3, <\/span><span class=\"tp_pub_additional_pages\">pp. 139\u2013157, <\/span><span class=\"tp_pub_additional_year\">2017<\/span>, <span class=\"tp_pub_additional_isbn\">ISBN: 2199-4668<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_109\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('109','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_109\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('109','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_109\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('109','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_109\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{jrie2017,<br \/>\r\ntitle = {Reliability and human factors in Ambient Assisted Living environments: The DOREMI case study},<br \/>\r\nauthor = {Palumbo Filippo and La Rosa Davide and Ferro Erina and Bacciu Davide and Gallicchio Claudio and Micheli Alessio and Chessa Stefano and Vozzi Federico and Parodi Oberdan},<br \/>\r\ndoi = {10.1007\/s40860-017-0042-1},<br \/>\r\nisbn = {2199-4668},<br \/>\r\nyear  = {2017},<br 
\/>\r\ndate = {2017-06-17},<br \/>\r\njournal = {Journal of Reliable Intelligent Environments},<br \/>\r\nvolume = {3},<br \/>\r\nnumber = {3},<br \/>\r\npages = {139\u2013157},<br \/>\r\npublisher = {Springer},<br \/>\r\nabstract = {Malnutrition, sedentariness, and cognitive decline in elderly people represent the target areas addressed by the DOREMI project. It aimed at developing a systemic solution for the elderly, able to prolong their functional and cognitive capacity by empowering, stimulating, and unobtrusively monitoring the daily activities according to well-defined \u201cActive Ageing\u201d life-style protocols. Besides the key features of DOREMI in terms of technological and medical protocol solutions, this work is focused on the analysis of the impact of such a solution on the daily life of users and how the users\u2019 behaviour modifies the expected results of the system in a long-term perspective. To this end, we analyse the reliability of the whole system in terms of human factors and their effects on the reliability requirements identified before starting the experimentation in the pilot sites. After giving an overview of the technological solutions we adopted in the project, this paper concentrates on the activities conducted during the two pilot site studies (32 test sites across the UK and Italy), the users\u2019 experience of the entire system, and how human factors influenced its overall reliability.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('109','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_109\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Malnutrition, sedentariness, and cognitive decline in elderly people represent the target areas addressed by the DOREMI project. 
It aimed at developing a systemic solution for the elderly, able to prolong their functional and cognitive capacity by empowering, stimulating, and unobtrusively monitoring the daily activities according to well-defined \u201cActive Ageing\u201d life-style protocols. Besides the key features of DOREMI in terms of technological and medical protocol solutions, this work is focused on the analysis of the impact of such a solution on the daily life of users and how the users\u2019 behaviour modifies the expected results of the system in a long-term perspective. To this end, we analyse the reliability of the whole system in terms of human factors and their effects on the reliability requirements identified before starting the experimentation in the pilot sites. After giving an overview of the technological solutions we adopted in the project, this paper concentrates on the activities conducted during the two pilot site studies (32 test sites across the UK and Italy), the users\u2019 experience of the entire system, and how human factors influenced its overall reliability.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('109','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_109\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1007\/s40860-017-0042-1\" title=\"Follow DOI:10.1007\/s40860-017-0042-1\" target=\"_blank\">doi:10.1007\/s40860-017-0042-1<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('109','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><\/div><\/div><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_number\">47.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Davide, Bacciu;  Antonio, Carta;  Stefania, Gnesi;  Laura, Semini<\/p><p class=\"tp_pub_title\"><a 
class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('105','tp_links')\" style=\"cursor:pointer;\">An Experience in using Machine Learning for Short-term Predictions in Smart Transportation Systems<\/a> <span class=\"tp_pub_type article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\"> Journal of Logical and Algebraic Methods in Programming , <\/span><span class=\"tp_pub_additional_volume\">vol. 87, <\/span><span class=\"tp_pub_additional_pages\">pp. 52-66, <\/span><span class=\"tp_pub_additional_year\">2017<\/span>, <span class=\"tp_pub_additional_issn\">ISSN: 2352-2208<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_105\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('105','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_105\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('105','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_105\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('105','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_105\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{jlamp2016,<br \/>\r\ntitle = {An Experience in using Machine Learning for Short-term Predictions in Smart Transportation Systems},<br \/>\r\nauthor = {Bacciu Davide and Carta Antonio and Gnesi Stefania and Semini Laura},<br \/>\r\neditor = {Alberto Lluch Lafuente and Maurice ter Beek},<br \/>\r\ndoi = {10.1016\/j.jlamp.2016.11.002},<br \/>\r\nissn = {2352-2208},<br \/>\r\nyear  = {2017},<br \/>\r\ndate = {2017-01-01},<br \/>\r\njournal = { Journal of Logical and Algebraic Methods in Programming },<br \/>\r\nvolume = 
{87},<br \/>\r\npages = {52-66},<br \/>\r\npublisher = {Elsevier},<br \/>\r\nabstract = {Bike-sharing systems (BSS) are a means of smart transportation with the benefit of a positive impact on urban mobility. To improve the satisfaction of a user of a BSS, it is useful to inform her\/him on the status of the stations at run time, and indeed most of the current systems provide the information in terms of the number of bicycles parked in each docking station by means of services available via web. However, when the departure station is empty, the user could also be happy to know how the situation will evolve and, in particular, if a bike is going to arrive (and vice versa when the arrival station is full).<br \/>\r\nTo fulfill this expectation, we envisage services able to make a prediction and infer whether a bike currently in use could, with high probability, be returned at the station where she\/he is waiting. The goal of this paper is hence to analyze the feasibility of these services. To this end, we put forward the idea of using Machine Learning methodologies, proposing and comparing different solutions.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('105','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_105\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Bike-sharing systems (BSS) are a means of smart transportation with the benefit of a positive impact on urban mobility. To improve the satisfaction of a user of a BSS, it is useful to inform her\/him on the status of the stations at run time, and indeed most of the current systems provide the information in terms of the number of bicycles parked in each docking station by means of services available via web. 
However, when the departure station is empty, the user could also be happy to know how the situation will evolve and, in particular, if a bike is going to arrive (and vice versa when the arrival station is full).<br \/>\r\nTo fulfill this expectation, we envisage services able to make a prediction and infer whether a bike currently in use could, with high probability, be returned at the station where she\/he is waiting. The goal of this paper is hence to analyze the feasibility of these services. To this end, we put forward the idea of using Machine Learning methodologies, proposing and comparing different solutions.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('105','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_105\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1016\/j.jlamp.2016.11.002\" title=\"Follow DOI:10.1016\/j.jlamp.2016.11.002\" target=\"_blank\">doi:10.1016\/j.jlamp.2016.11.002<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('105','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><\/div><\/div><h3 class=\"tp_h3\" id=\"tp_h3_2016\">2016<\/h3><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_number\">48.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Davide, Bacciu<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('11','tp_links')\" style=\"cursor:pointer;\">Unsupervised feature selection for sensor time-series in pervasive computing applications<\/a> <span class=\"tp_pub_type article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Neural Computing and Applications, <\/span><span 
class=\"tp_pub_additional_volume\">vol. 27, <\/span><span class=\"tp_pub_additional_number\">no. 5, <\/span><span class=\"tp_pub_additional_pages\">pp. 1077-1091, <\/span><span class=\"tp_pub_additional_year\">2016<\/span>, <span class=\"tp_pub_additional_issn\">ISSN: 1433-3058<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_11\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('11','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_11\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('11','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_11\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('11','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_11\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{icfNca15,<br \/>\r\ntitle = {Unsupervised feature selection for sensor time-series in pervasive computing applications},<br \/>\r\nauthor = {Bacciu Davide},<br \/>\r\nurl = {http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2016\/04\/nca2015.pdf},<br \/>\r\ndoi = {10.1007\/s00521-015-1924-x},<br \/>\r\nissn = {1433-3058},<br \/>\r\nyear  = {2016},<br \/>\r\ndate = {2016-07-01},<br \/>\r\nurldate = {2016-07-01},<br \/>\r\njournal = {Neural Computing and Applications},<br \/>\r\nvolume = {27},<br \/>\r\nnumber = {5},<br \/>\r\npages = {1077-1091},<br \/>\r\npublisher = {Springer London},<br \/>\r\nabstract = {The paper introduces an efficient feature selection approach for multivariate time-series of heterogeneous sensor data within a pervasive computing scenario. An iterative filtering procedure is devised to reduce information redundancy measured in terms of time-series cross-correlation. 
The algorithm is capable of identifying nonredundant sensor sources in an unsupervised fashion even in the presence of a large proportion of noisy features. In particular, the proposed feature selection process does not require expert intervention to determine the number of selected features, which is a key advancement with respect to time-series filters in the literature. The characteristics of the proposed algorithm allow enriching learning systems, in pervasive computing applications, with a fully automated feature selection mechanism which can be triggered and performed at run time during system operation. A comparative experimental analysis on real-world data from three pervasive computing applications is provided, showing that the algorithm addresses major limitations of unsupervised filters in the literature when dealing with sensor time-series. Specifically, an assessment is presented both in terms of reduction of time-series redundancy and in terms of preservation of informative features with respect to associated supervised learning tasks.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('11','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_11\" style=\"display:none;\"><div class=\"tp_abstract_entry\">The paper introduces an efficient feature selection approach for multivariate time-series of heterogeneous sensor data within a pervasive computing scenario. An iterative filtering procedure is devised to reduce information redundancy measured in terms of time-series cross-correlation. The algorithm is capable of identifying nonredundant sensor sources in an unsupervised fashion even in the presence of a large proportion of noisy features. 
In particular, the proposed feature selection process does not require expert intervention to determine the number of selected features, which is a key advancement with respect to time-series filters in the literature. The characteristics of the proposed algorithm allow enriching learning systems, in pervasive computing applications, with a fully automated feature selection mechanism which can be triggered and performed at run time during system operation. A comparative experimental analysis on real-world data from three pervasive computing applications is provided, showing that the algorithm addresses major limitations of unsupervised filters in the literature when dealing with sensor time-series. Specifically, an assessment is presented both in terms of reduction of time-series redundancy and in terms of preservation of informative features with respect to associated supervised learning tasks.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('11','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_11\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-file-pdf\"><\/i><a class=\"tp_pub_list\" href=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2016\/04\/nca2015.pdf\" title=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2016\/04\/nca2015.pdf\" target=\"_blank\">http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2016\/04\/nca2015.pdf<\/a><\/li><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1007\/s00521-015-1924-x\" title=\"Follow DOI:10.1007\/s00521-015-1924-x\" target=\"_blank\">doi:10.1007\/s00521-015-1924-x<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('11','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Unsupervised feature 
selection for sensor time-series in pervasive computing applications\" src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/nca.jpg\" width=\"80\" alt=\"Unsupervised feature selection for sensor time-series in pervasive computing applications\" \/><\/div><\/div><h3 class=\"tp_h3\" id=\"tp_h3_2015\">2015<\/h3><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_number\">49.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Giuseppe, Amato;  Davide, Bacciu;  Mathias, Broxvall;  Stefano, Chessa;  Sonya, Coleman;  Maurizio, Di Rocco;  Mauro, Dragone;  Claudio, Gallicchio;  Claudio, Gennaro;  Hector, Lozano;  Martin, McGinnity T;  Alessio, Micheli;  AK, Ray;  Arantxa, Renteria;  Alessandro, Saffiotti;  David, Swords;  Claudio, Vairo;  Philip, Vance<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('10','tp_links')\" style=\"cursor:pointer;\">Robotic Ubiquitous Cognitive Ecology for Smart Homes<\/a> <span class=\"tp_pub_type article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Journal of Intelligent &amp; Robotic Systems, <\/span><span class=\"tp_pub_additional_volume\">vol. 80, <\/span><span class=\"tp_pub_additional_number\">no. 1, <\/span><span class=\"tp_pub_additional_pages\">pp. 
57-81, <\/span><span class=\"tp_pub_additional_year\">2015<\/span>, <span class=\"tp_pub_additional_issn\">ISSN: 0921-0296<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_10\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('10','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_10\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('10','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_10\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('10','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_10\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{bacciuJirs15,<br \/>\r\ntitle = {Robotic Ubiquitous Cognitive Ecology for Smart Homes},<br \/>\r\nauthor = {Amato Giuseppe and Bacciu Davide and Broxvall Mathias and Chessa Stefano and Coleman Sonya and Di Rocco Maurizio and Dragone Mauro and Gallicchio Claudio and Gennaro Claudio and Lozano Hector and McGinnity T Martin and Micheli Alessio and Ray AK and Renteria Arantxa and Saffiotti Alessandro and Swords David and Vairo Claudio and Vance Philip},<br \/>\r\nurl = {http:\/\/dx.doi.org\/10.1007\/s10846-015-0178-2},<br \/>\r\ndoi = {10.1007\/s10846-015-0178-2},<br \/>\r\nissn = {0921-0296},<br \/>\r\nyear  = {2015},<br \/>\r\ndate = {2015-01-01},<br \/>\r\njournal = {Journal of Intelligent & Robotic Systems},<br \/>\r\nvolume = {80},<br \/>\r\nnumber = {1},<br \/>\r\npages = {57-81},<br \/>\r\npublisher = {Springer Netherlands},<br \/>\r\nabstract = {Robotic ecologies are networks of heterogeneous robotic devices pervasively embedded in everyday environments, where they cooperate to perform complex tasks. 
While their potential makes them increasingly popular, one fundamental problem is how to make them both autonomous and adaptive, so as to reduce the amount of preparation, pre-programming and human supervision that they require in real-world applications. The project RUBICON develops learning solutions which yield cheaper, adaptive and efficient coordination of robotic ecologies. The approach we pursue builds upon a unique combination of methods from cognitive robotics, machine learning, planning and agent-based control, and wireless sensor networks. This paper illustrates the innovations advanced by RUBICON on each of these fronts before describing how the resulting techniques have been integrated and applied to a proof-of-concept smart home scenario. The resulting system is able to provide useful services and pro-actively assist the users in their activities. RUBICON learns through an incremental and progressive approach driven by the feedback received from its own activities and from the user, while also self-organizing the manner in which it uses available sensors, actuators and other functional components in the process. This paper summarises some of the lessons learned by adopting such an approach and outlines promising directions for future work.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('10','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_10\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Robotic ecologies are networks of heterogeneous robotic devices pervasively embedded in everyday environments, where they cooperate to perform complex tasks. 
While their potential makes them increasingly popular, one fundamental problem is how to make them both autonomous and adaptive, so as to reduce the amount of preparation, pre-programming and human supervision that they require in real-world applications. The project RUBICON develops learning solutions which yield cheaper, adaptive and efficient coordination of robotic ecologies. The approach we pursue builds upon a unique combination of methods from cognitive robotics, machine learning, planning and agent-based control, and wireless sensor networks. This paper illustrates the innovations advanced by RUBICON on each of these fronts before describing how the resulting techniques have been integrated and applied to a proof-of-concept smart home scenario. The resulting system is able to provide useful services and pro-actively assist the users in their activities. RUBICON learns through an incremental and progressive approach driven by the feedback received from its own activities and from the user, while also self-organizing the manner in which it uses available sensors, actuators and other functional components in the process. 
This paper summarises some of the lessons learned by adopting such an approach and outlines promising directions for future work.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('10','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_10\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-globe\"><\/i><a class=\"tp_pub_list\" href=\"http:\/\/dx.doi.org\/10.1007\/s10846-015-0178-2\" title=\"http:\/\/dx.doi.org\/10.1007\/s10846-015-0178-2\" target=\"_blank\">http:\/\/dx.doi.org\/10.1007\/s10846-015-0178-2<\/a><\/li><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1007\/s10846-015-0178-2\" title=\"Follow DOI:10.1007\/s10846-015-0178-2\" target=\"_blank\">doi:10.1007\/s10846-015-0178-2<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('10','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><\/div><\/div><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_number\">50.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Mauro, Dragone;  Giuseppe, Amato;  Davide, Bacciu;  Stefano, Chessa;  Sonya, Coleman;  Maurizio, Di Rocco;  Claudio, Gallicchio;  Claudio, Gennaro;  Hector, Lozano;  Liam, Maguire;  Martin, McGinnity;  Alessio, Micheli;  M.P., O'Hare Gregory;  Arantxa, Renteria;  Alessandro, Saffiotti;  Claudio, Vairo;  Philip, Vance<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('12','tp_links')\" style=\"cursor:pointer;\">A Cognitive Robotic Ecology Approach to Self-configuring and Evolving AAL Systems<\/a> <span class=\"tp_pub_type article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Engineering Applications of Artificial Intelligence, <\/span><span 
class=\"tp_pub_additional_volume\">vol. 45, <\/span><span class=\"tp_pub_additional_number\">no. C, <\/span><span class=\"tp_pub_additional_pages\">pp. 269\u2013280, <\/span><span class=\"tp_pub_additional_year\">2015<\/span>, <span class=\"tp_pub_additional_issn\">ISSN: 0952-1976<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_resource_link\"><a id=\"tp_links_sh_12\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('12','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_12\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('12','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_12\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Dragone:2015:CRE:2827370.2827596,<br \/>\r\ntitle = {A Cognitive Robotic Ecology Approach to Self-configuring and Evolving AAL Systems},<br \/>\r\nauthor = {Dragone Mauro and Amato Giuseppe and Bacciu Davide and Chessa Stefano and Coleman Sonya and Di Rocco Maurizio and Gallicchio Claudio and Gennaro Claudio and Lozano Hector and Maguire Liam and McGinnity Martin and Micheli Alessio and O'Hare Gregory M.P. 
and Renteria Arantxa and Saffiotti Alessandro and Vairo Claudio and Vance Philip},<br \/>\r\nurl = {http:\/\/dx.doi.org\/10.1016\/j.engappai.2015.07.004},<br \/>\r\ndoi = {10.1016\/j.engappai.2015.07.004},<br \/>\r\nissn = {0952-1976},<br \/>\r\nyear  = {2015},<br \/>\r\ndate = {2015-01-01},<br \/>\r\nurldate = {2015-01-01},<br \/>\r\njournal = {Engineering Applications of Artificial Intelligence},<br \/>\r\nvolume = {45},<br \/>\r\nnumber = {C},<br \/>\r\npages = {269--280},<br \/>\r\npublisher = {Pergamon Press, Inc.},<br \/>\r\naddress = {Tarrytown, NY, USA},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('12','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_12\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-globe\"><\/i><a class=\"tp_pub_list\" href=\"http:\/\/dx.doi.org\/10.1016\/j.engappai.2015.07.004\" title=\"http:\/\/dx.doi.org\/10.1016\/j.engappai.2015.07.004\" target=\"_blank\">http:\/\/dx.doi.org\/10.1016\/j.engappai.2015.07.004<\/a><\/li><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1016\/j.engappai.2015.07.004\" title=\"Follow DOI:10.1016\/j.engappai.2015.07.004\" target=\"_blank\">doi:10.1016\/j.engappai.2015.07.004<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('12','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"A Cognitive Robotic Ecology Approach to Self-configuring and Evolving AAL Systems\" src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/eaai.jpg\" width=\"80\" alt=\"A Cognitive Robotic Ecology Approach to Self-configuring and Evolving AAL Systems\" \/><\/div><\/div><\/div><div class=\"tablenav\"><div 
class=\"tablenav-pages\"><span class=\"displaying-num\">59 entries<\/span> <a class=\"page-numbers button disabled\">&laquo;<\/a> <a class=\"page-numbers button disabled\">&lsaquo;<\/a> 1 of 2 <a href=\"https:\/\/pages.di.unipi.it\/bacciu\/publications\/journals\/?limit=2&amp;tgid=&amp;yr=&amp;type=&amp;usr=&amp;auth=&amp;tsr=#tppubs\" title=\"next page\" class=\"page-numbers button\">&rsaquo;<\/a> <a href=\"https:\/\/pages.di.unipi.it\/bacciu\/publications\/journals\/?limit=2&amp;tgid=&amp;yr=&amp;type=&amp;usr=&amp;auth=&amp;tsr=#tppubs\" title=\"last page\" class=\"page-numbers button\">&raquo;<\/a> <\/div><\/div><\/div><\/code><\/p>\n","protected":false},"excerpt":{"rendered":"","protected":false},"author":19,"featured_media":0,"parent":13,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-1344","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/pages.di.unipi.it\/bacciu\/wp-json\/wp\/v2\/pages\/1344","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/pages.di.unipi.it\/bacciu\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/pages.di.unipi.it\/bacciu\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/pages.di.unipi.it\/bacciu\/wp-json\/wp\/v2\/users\/19"}],"replies":[{"embeddable":true,"href":"https:\/\/pages.di.unipi.it\/bacciu\/wp-json\/wp\/v2\/comments?post=1344"}],"version-history":[{"count":6,"href":"https:\/\/pages.di.unipi.it\/bacciu\/wp-json\/wp\/v2\/pages\/1344\/revisions"}],"predecessor-version":[{"id":1483,"href":"https:\/\/pages.di.unipi.it\/bacciu\/wp-json\/wp\/v2\/pages\/1344\/revisions\/1483"}],"up":[{"embeddable":true,"href":"https:\/\/pages.di.unipi.it\/bacciu\/wp-json\/wp\/v2\/pages\/13"}],"wp:attachment":[{"href":"https:\/\/pages.di.unipi.it\/bacciu\/wp-json\/wp\/v2\/media?parent=1344"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}