{"id":1410,"date":"2024-01-02T10:56:36","date_gmt":"2024-01-02T09:56:36","guid":{"rendered":"http:\/\/pages.di.unipi.it\/bacciu\/?page_id=1410"},"modified":"2024-01-02T11:56:45","modified_gmt":"2024-01-02T10:56:45","slug":"all","status":"publish","type":"page","link":"https:\/\/pages.di.unipi.it\/bacciu\/publications\/all\/","title":{"rendered":"All"},"content":{"rendered":"\n<p><code><div class=\"teachpress_pub_list\"><form name=\"tppublistform\" method=\"get\"><a name=\"tppubs\" id=\"tppubs\"><\/a><\/form><div class=\"tablenav\"><div class=\"tablenav-pages\"><span class=\"displaying-num\">224 entries<\/span> <a class=\"page-numbers button disabled\">&laquo;<\/a> <a class=\"page-numbers button disabled\">&lsaquo;<\/a> 1 of 5 <a href=\"https:\/\/pages.di.unipi.it\/bacciu\/publications\/all\/?limit=2&amp;tgid=&amp;yr=&amp;type=&amp;usr=&amp;auth=&amp;tsr=#tppubs\" title=\"next page\" class=\"page-numbers button\">&rsaquo;<\/a> <a href=\"https:\/\/pages.di.unipi.it\/bacciu\/publications\/all\/?limit=5&amp;tgid=&amp;yr=&amp;type=&amp;usr=&amp;auth=&amp;tsr=#tppubs\" title=\"last page\" class=\"page-numbers button\">&raquo;<\/a> <\/div><\/div><div class=\"teachpress_publication_list\"><h3 class=\"tp_h3\" id=\"tp_h3_2024\">2024<\/h3><div class=\"tp_publication tp_publication_conference\"><div class=\"tp_pub_number\">1.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Gravina, Alessio;  Zambon, Daniele;  Bacciu, Davide;  Alippi, Cesare<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('261','tp_links')\" style=\"cursor:pointer;\">Temporal Graph ODEs for Irregularly-Sampled Time Series<\/a> <span class=\"tp_pub_type conference\">Conference<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_booktitle\">Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI 2024), <\/span><span class=\"tp_pub_additional_year\">2024<\/span>.<\/p><p class=\"tp_pub_menu\"><span 
class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_261\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('261','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_261\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('261','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_261\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('261','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_261\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@conference{nokey,<br \/>\r\ntitle = {Temporal Graph ODEs for Irregularly-Sampled Time Series},<br \/>\r\nauthor = {Alessio Gravina and Daniele Zambon and Davide Bacciu and Cesare Alippi},<br \/>\r\nurl = {https:\/\/arxiv.org\/abs\/2404.19508, Arxiv},<br \/>\r\nyear  = {2024},<br \/>\r\ndate = {2024-08-09},<br \/>\r\nurldate = {2024-08-09},<br \/>\r\nbooktitle = {Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI 2024)},<br \/>\r\nabstract = {Modern graph representation learning works mostly under the assumption of dealing with regularly sampled temporal graph snapshots, which is far from realistic, e.g., social networks and physical systems are characterized by continuous dynamics and sporadic observations. To address this limitation, we introduce the Temporal Graph Ordinary Differential Equation (TG-ODE) framework, which learns both the temporal and spatial dynamics from graph streams where the intervals between observations are not regularly spaced. 
We empirically validate the proposed approach on several graph benchmarks, showing that TG-ODE can achieve state-of-the-art performance in irregular graph stream tasks.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {conference}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('261','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_261\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Modern graph representation learning works mostly under the assumption of dealing with regularly sampled temporal graph snapshots, which is far from realistic, e.g., social networks and physical systems are characterized by continuous dynamics and sporadic observations. To address this limitation, we introduce the Temporal Graph Ordinary Differential Equation (TG-ODE) framework, which learns both the temporal and spatial dynamics from graph streams where the intervals between observations are not regularly spaced. 
We empirically validate the proposed approach on several graph benchmarks, showing that TG-ODE can achieve state-of-the-art performance in irregular graph stream tasks.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('261','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_261\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-arxiv\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/arxiv.org\/abs\/2404.19508\" title=\"Arxiv\" target=\"_blank\">Arxiv<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('261','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Temporal Graph ODEs for Irregularly-Sampled Time Series\" src=\"https:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/ijcai.png\" width=\"80\" alt=\"Temporal Graph ODEs for Irregularly-Sampled Time Series\" \/><\/div><\/div><div class=\"tp_publication tp_publication_inproceedings\"><div class=\"tp_pub_number\">2.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Massidda, Riccardo;  Magliacane, Sara;  Bacciu, Davide<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('260','tp_links')\" style=\"cursor:pointer;\">Learning Causal Abstractions of Linear Structural Causal Models<\/a> <span class=\"tp_pub_type inproceedings\">Proceedings Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_booktitle\">The 40th Conference on Uncertainty in Artificial Intelligence, <\/span><span class=\"tp_pub_additional_year\">2024<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_resource_link\"><a id=\"tp_links_sh_260\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('260','tp_links')\" title=\"Show links and resources\" 
style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_260\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('260','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_260\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@inproceedings{massidda2024learning,<br \/>\r\ntitle = {Learning Causal Abstractions of Linear Structural Causal Models},<br \/>\r\nauthor = {Riccardo Massidda and Sara Magliacane and Davide Bacciu},<br \/>\r\nurl = {https:\/\/openreview.net\/forum?id=XlFqI9TMhf},<br \/>\r\nyear  = {2024},<br \/>\r\ndate = {2024-07-31},<br \/>\r\nurldate = {2024-07-31},<br \/>\r\nbooktitle = {The 40th Conference on Uncertainty in Artificial Intelligence},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {inproceedings}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('260','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_260\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-globe\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/openreview.net\/forum?id=XlFqI9TMhf\" title=\"https:\/\/openreview.net\/forum?id=XlFqI9TMhf\" target=\"_blank\">https:\/\/openreview.net\/forum?id=XlFqI9TMhf<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('260','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Learning Causal Abstractions of Linear Structural Causal Models\" src=\"https:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/08\/uai.png\" width=\"80\" alt=\"Learning Causal Abstractions of Linear Structural Causal Models\" \/><\/div><\/div><div class=\"tp_publication tp_publication_conference\"><div 
class=\"tp_pub_number\">3.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Gravina, Alessio;  Lovisotto, Giulia;  Gallicchio, Claudio;  Bacciu, Davide;  Grohnfeldt, Claas<\/p><p class=\"tp_pub_title\">Long Range Propagation on Continuous-Time Dynamic Graphs <span class=\"tp_pub_type conference\">Conference<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_booktitle\">Proceedings of the International Conference on Machine Learning (ICML 2024), <\/span><span class=\"tp_pub_additional_publisher\">PMLR, <\/span><span class=\"tp_pub_additional_year\">2024<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_258\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('258','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_258\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('258','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_258\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@conference{nokey,<br \/>\r\ntitle = {Long Range Propagation on Continuous-Time Dynamic Graphs},<br \/>\r\nauthor = {Alessio Gravina and Giulia Lovisotto and Claudio Gallicchio and Davide Bacciu and Claas Grohnfeldt},<br \/>\r\nyear  = {2024},<br \/>\r\ndate = {2024-07-24},<br \/>\r\nurldate = {2024-07-24},<br \/>\r\nbooktitle = {Proceedings of the International Conference on Machine Learning (ICML 2024)},<br \/>\r\npublisher = {PMLR},<br \/>\r\nabstract = {Learning Continuous-Time Dynamic Graphs (C-TDGs) requires accurately modeling spatio-temporal information on streams of irregularly sampled events. While many methods have been proposed recently, we find that most message passing-, recurrent- or self-attention-based methods perform poorly on long-range tasks. 
These tasks require correlating information that occurred \"far\" away from the current event, either spatially (higher-order node information) or along the time dimension (events occurred in the past). To address long-range dependencies, we introduce Continuous-Time Graph Anti-Symmetric Network (CTAN). Grounded within the ordinary differential equations framework, our method is designed for efficient propagation of information. In this paper, we show how CTAN's (i) long-range modeling capabilities are substantiated by theoretical findings and how (ii) its empirical performance on synthetic long-range benchmarks and real-world benchmarks is superior to other methods. Our results motivate CTAN's ability to propagate long-range information in C-TDGs as well as the inclusion of long-range tasks as part of temporal graph models evaluation.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {conference}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('258','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_258\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Learning Continuous-Time Dynamic Graphs (C-TDGs) requires accurately modeling spatio-temporal information on streams of irregularly sampled events. While many methods have been proposed recently, we find that most message passing-, recurrent- or self-attention-based methods perform poorly on long-range tasks. These tasks require correlating information that occurred &quot;far&quot; away from the current event, either spatially (higher-order node information) or along the time dimension (events occurred in the past). To address long-range dependencies, we introduce Continuous-Time Graph Anti-Symmetric Network (CTAN). Grounded within the ordinary differential equations framework, our method is designed for efficient propagation of information. 
In this paper, we show how CTAN's (i) long-range modeling capabilities are substantiated by theoretical findings and how (ii) its empirical performance on synthetic long-range benchmarks and real-world benchmarks is superior to other methods. Our results motivate CTAN's ability to propagate long-range information in C-TDGs as well as the inclusion of long-range tasks as part of temporal graph models evaluation.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('258','tp_abstract')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Long Range Propagation on Continuous-Time Dynamic Graphs\" src=\"https:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/icml.png\" width=\"80\" alt=\"Long Range Propagation on Continuous-Time Dynamic Graphs\" \/><\/div><\/div><div class=\"tp_publication tp_publication_inproceedings\"><div class=\"tp_pub_number\">4.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Bacciu, Davide;  Landolfi, Francesco<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('269','tp_links')\" style=\"cursor:pointer;\">Generalizing Convolution to Point Clouds<\/a> <span class=\"tp_pub_type inproceedings\">Proceedings Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_booktitle\">ICML 2024 Workshop on Differentiable Almost Everything: Differentiable Relaxations, Algorithms, Operators, and Simulators, <\/span><span class=\"tp_pub_additional_year\">2024<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_269\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('269','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_269\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('269','tp_links')\" 
title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_269\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('269','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_269\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@inproceedings{bacciu2024generalizing,<br \/>\r\ntitle = {Generalizing Convolution to Point Clouds},<br \/>\r\nauthor = {Davide Bacciu and Francesco Landolfi},<br \/>\r\nurl = {https:\/\/openreview.net\/forum?id=TXwDtUmiaj},<br \/>\r\nyear  = {2024},<br \/>\r\ndate = {2024-07-23},<br \/>\r\nurldate = {2024-01-01},<br \/>\r\nbooktitle = {ICML 2024 Workshop on Differentiable Almost Everything: Differentiable Relaxations, Algorithms, Operators, and Simulators},<br \/>\r\nabstract = {Convolution, a fundamental operation in deep learning for structured grid data like images, cannot be directly applied to point clouds due to their irregular and unordered nature. Many approaches in literature that perform convolution on point clouds achieve this by designing a convolutional operator from scratch, often with little resemblance to the one used on images. We present two point cloud convolutions that naturally follow from the convolution in its standard definition popular with images. We do so by relaxing the indexing of the kernel weights with a \"soft\" dictionary that resembles the attention mechanism of the transformers. 
Finally, experimental results demonstrate the effectiveness of the proposed relaxations on two benchmark point cloud classification tasks.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {inproceedings}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('269','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_269\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Convolution, a fundamental operation in deep learning for structured grid data like images, cannot be directly applied to point clouds due to their irregular and unordered nature. Many approaches in literature that perform convolution on point clouds achieve this by designing a convolutional operator from scratch, often with little resemblance to the one used on images. We present two point cloud convolutions that naturally follow from the convolution in its standard definition popular with images. We do so by relaxing the indexing of the kernel weights with a &quot;soft&quot; dictionary that resembles the attention mechanism of the transformers. 
Finally, experimental results demonstrate the effectiveness of the proposed relaxations on two benchmark point cloud classification tasks.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('269','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_269\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-globe\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/openreview.net\/forum?id=TXwDtUmiaj\" title=\"https:\/\/openreview.net\/forum?id=TXwDtUmiaj\" target=\"_blank\">https:\/\/openreview.net\/forum?id=TXwDtUmiaj<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('269','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Generalizing Convolution to Point Clouds\" src=\"https:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/icml.png\" width=\"80\" alt=\"Generalizing Convolution to Point Clouds\" \/><\/div><\/div><div class=\"tp_publication tp_publication_inproceedings\"><div class=\"tp_pub_number\">5.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Trenta, Alessandro;  Bacciu, Davide;  Cossu, Andrea;  Ferrero, Pietro<\/p><p class=\"tp_pub_title\">MultiSTOP: Solving Functional Equations with Reinforcement Learning <span class=\"tp_pub_type inproceedings\">Proceedings Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_booktitle\">ICLR 2024 Workshop on AI4DifferentialEquations In Science, <\/span><span class=\"tp_pub_additional_year\">2024<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_262\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('262','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_bibtex_link\"><a 
id=\"tp_bibtex_sh_262\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('262','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_262\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@inproceedings{trenta2024multistop,<br \/>\r\ntitle = {MultiSTOP: Solving Functional Equations with Reinforcement Learning},<br \/>\r\nauthor = {Alessandro Trenta and Davide Bacciu and Andrea Cossu and Pietro Ferrero},<br \/>\r\nyear  = {2024},<br \/>\r\ndate = {2024-05-11},<br \/>\r\nurldate = {2024-05-11},<br \/>\r\nbooktitle = {ICLR 2024 Workshop on AI4DifferentialEquations In Science},<br \/>\r\nabstract = {We develop MultiSTOP, a Reinforcement Learning framework for solving functional equations in physics. This new methodology produces actual numerical solutions instead of bounds on them. We extend the original BootSTOP algorithm by adding multiple constraints derived from domain-specific knowledge, even in integral form, to improve the accuracy of the solution. We investigate a particular equation in a one-dimensional Conformal Field Theory.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {inproceedings}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('262','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_262\" style=\"display:none;\"><div class=\"tp_abstract_entry\">We develop MultiSTOP, a Reinforcement Learning framework for solving functional equations in physics. This new methodology produces actual numerical solutions instead of bounds on them. We extend the original BootSTOP algorithm by adding multiple constraints derived from domain-specific knowledge, even in integral form, to improve the accuracy of the solution. 
We investigate a particular equation in a one-dimensional Conformal Field Theory.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('262','tp_abstract')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"MultiSTOP: Solving Functional Equations with Reinforcement Learning\" src=\"https:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/iclr.png\" width=\"80\" alt=\"MultiSTOP: Solving Functional Equations with Reinforcement Learning\" \/><\/div><\/div><div class=\"tp_publication tp_publication_conference\"><div class=\"tp_pub_number\">6.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Cinquini, Martina;  Landolfi, Francesco;  Massidda, Riccardo<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('254','tp_links')\" style=\"cursor:pointer;\">Constraint-Free Structure Learning with Smooth Acyclic Orientations<\/a> <span class=\"tp_pub_type conference\">Conference<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_booktitle\">The Twelfth International Conference on Learning Representations, <\/span><span class=\"tp_pub_additional_year\">2024<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_254\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('254','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_254\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('254','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_254\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('254','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_254\" style=\"display:none;\"><div 
class=\"tp_bibtex_entry\"><pre>@conference{cosmo2024,<br \/>\r\ntitle = {Constraint-Free Structure Learning with Smooth Acyclic Orientations},<br \/>\r\nauthor = {Martina Cinquini and Francesco Landolfi and Riccardo Massidda},<br \/>\r\nurl = {https:\/\/openreview.net\/forum?id=KWO8LSUC5W},<br \/>\r\nyear  = {2024},<br \/>\r\ndate = {2024-05-06},<br \/>\r\nurldate = {2024-01-01},<br \/>\r\nbooktitle = {The Twelfth International Conference on Learning Representations},<br \/>\r\nabstract = {The structure learning problem consists of fitting data generated by a Directed Acyclic Graph (DAG) to correctly reconstruct its arcs. In this context, differentiable approaches constrain or regularize an optimization problem with a continuous relaxation of the acyclicity property. The computational cost of evaluating graph acyclicity is cubic on the number of nodes and significantly affects scalability. In this paper, we introduce COSMO, a constraint-free continuous optimization scheme for acyclic structure learning. At the core of our method lies a novel differentiable approximation of an orientation matrix parameterized by a single priority vector. Differently from previous works, our parameterization fits a smooth orientation matrix and the resulting acyclic adjacency matrix without evaluating acyclicity at any step. Despite this absence, we prove that COSMO always converges to an acyclic solution. 
In addition to being asymptotically faster, our empirical analysis highlights how COSMO performance on graph reconstruction compares favorably with competing structure learning methods.<br \/>\r\n},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {conference}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('254','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_254\" style=\"display:none;\"><div class=\"tp_abstract_entry\">The structure learning problem consists of fitting data generated by a Directed Acyclic Graph (DAG) to correctly reconstruct its arcs. In this context, differentiable approaches constrain or regularize an optimization problem with a continuous relaxation of the acyclicity property. The computational cost of evaluating graph acyclicity is cubic on the number of nodes and significantly affects scalability. In this paper, we introduce COSMO, a constraint-free continuous optimization scheme for acyclic structure learning. At the core of our method lies a novel differentiable approximation of an orientation matrix parameterized by a single priority vector. Differently from previous works, our parameterization fits a smooth orientation matrix and the resulting acyclic adjacency matrix without evaluating acyclicity at any step. Despite this absence, we prove that COSMO always converges to an acyclic solution. 
In addition to being asymptotically faster, our empirical analysis highlights how COSMO performance on graph reconstruction compares favorably with competing structure learning methods.<br \/>\r\n<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('254','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_254\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-globe\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/openreview.net\/forum?id=KWO8LSUC5W\" title=\"https:\/\/openreview.net\/forum?id=KWO8LSUC5W\" target=\"_blank\">https:\/\/openreview.net\/forum?id=KWO8LSUC5W<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('254','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Constraint-Free Structure Learning with Smooth Acyclic Orientations\" src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/iclr.png\" width=\"80\" alt=\"Constraint-Free Structure Learning with Smooth Acyclic Orientations\" \/><\/div><\/div><div class=\"tp_publication tp_publication_conference\"><div class=\"tp_pub_number\">7.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Pasquali, Alex;  Lomonaco, Vincenzo;  Bacciu, Davide;  Paganelli, Federica<\/p><p class=\"tp_pub_title\">Deep Reinforcement Learning for Network Slice Placement and the DeepNetSlice Toolkit <span class=\"tp_pub_type conference\">Conference<\/span> <span class=\"tp_pub_label_status forthcoming\">Forthcoming<\/span><\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_booktitle\">Proceedings of the IEEE International Conference on Machine Learning for Communication and Networking 2024 (IEEE ICMLCN 2024), <\/span><span class=\"tp_pub_additional_publisher\">IEEE, <\/span>Forthcoming.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_bibtex_link\"><a 
id=\"tp_bibtex_sh_251\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('251','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_251\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@conference{nokey,<br \/>\r\ntitle = {Deep Reinforcement Learning for Network Slice Placement and the DeepNetSlice Toolkit},<br \/>\r\nauthor = {Alex Pasquali and Vincenzo Lomonaco and Davide Bacciu and Federica Paganelli},<br \/>\r\nyear  = {2024},<br \/>\r\ndate = {2024-05-05},<br \/>\r\nurldate = {2024-05-05},<br \/>\r\nbooktitle = {Proceedings of the IEEE International Conference on Machine Learning for Communication and Networking 2024 (IEEE ICMLCN 2024)},<br \/>\r\npublisher = {IEEE},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {forthcoming},<br \/>\r\ntppubtype = {conference}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('251','tp_bibtex')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Deep Reinforcement Learning for Network Slice Placement and the DeepNetSlice Toolkit\" src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/ICMLCN-Logo-164x70-1.png\" width=\"80\" alt=\"Deep Reinforcement Learning for Network Slice Placement and the DeepNetSlice Toolkit\" \/><\/div><\/div><div class=\"tp_publication tp_publication_workshop\"><div class=\"tp_pub_number\">8.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Ninniri, Matteo;  Podda, Marco;  Bacciu, Davide<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('250','tp_links')\" style=\"cursor:pointer;\">Classifier-free graph diffusion for molecular property targeting<\/a> <span class=\"tp_pub_type workshop\">Workshop<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_booktitle\">4th workshop on Graphs and more 
Complex structures for Learning and Reasoning (GCLR) at AAAI 2024, <\/span><span class=\"tp_pub_additional_year\">2024<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_250\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('250','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_250\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('250','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_250\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('250','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_250\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@workshop{Ninniri2024,<br \/>\r\ntitle = {Classifier-free graph diffusion for molecular property targeting},<br \/>\r\nauthor = {Matteo Ninniri and Marco Podda and Davide Bacciu},<br \/>\r\nurl = {https:\/\/arxiv.org\/abs\/2312.17397, Arxiv},<br \/>\r\nyear  = {2024},<br \/>\r\ndate = {2024-02-27},<br \/>\r\nbooktitle = {4th workshop on Graphs and more Complex structures for Learning and Reasoning (GCLR) at AAAI 2024},<br \/>\r\nabstract = {This work focuses on the task of property targeting: that is, generating molecules conditioned on target chemical properties to expedite candidate screening for novel drug and materials development. DiGress is a recent diffusion model for molecular graphs whose distinctive feature is allowing property targeting through classifier-based (CB) guidance. While CB guidance may work to generate molecular-like graphs, we hint at the fact that its assumptions apply poorly to the chemical domain. Based on this insight we propose a classifier-free DiGress (FreeGress), which works by directly injecting the conditioning information into the training process. 
CF guidance is convenient given its less stringent assumptions and since it does not require to train an auxiliary property regressor, thus halving the number of trainable parameters in the model. We empirically show that our model yields up to 79% improvement in Mean Absolute Error with respect to DiGress on property targeting tasks on QM9 and ZINC-250k benchmarks. As an additional contribution, we propose a simple yet powerful approach to improve chemical validity of generated samples, based on the observation that certain chemical properties such as molecular weight correlate with the number of atoms in molecules. },<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {workshop}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('250','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_250\" style=\"display:none;\"><div class=\"tp_abstract_entry\">This work focuses on the task of property targeting: that is, generating molecules conditioned on target chemical properties to expedite candidate screening for novel drug and materials development. DiGress is a recent diffusion model for molecular graphs whose distinctive feature is allowing property targeting through classifier-based (CB) guidance. While CB guidance may work to generate molecular-like graphs, we hint at the fact that its assumptions apply poorly to the chemical domain. Based on this insight we propose a classifier-free DiGress (FreeGress), which works by directly injecting the conditioning information into the training process. CF guidance is convenient given its less stringent assumptions and since it does not require to train an auxiliary property regressor, thus halving the number of trainable parameters in the model. 
We empirically show that our model yields up to 79% improvement in Mean Absolute Error with respect to DiGress on property targeting tasks on QM9 and ZINC-250k benchmarks. As an additional contribution, we propose a simple yet powerful approach to improve chemical validity of generated samples, based on the observation that certain chemical properties such as molecular weight correlate with the number of atoms in molecules. <\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('250','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_250\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-arxiv\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/arxiv.org\/abs\/2312.17397\" title=\"Arxiv\" target=\"_blank\">Arxiv<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('250','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Classifier-free graph diffusion for molecular property targeting\" src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/aaai.jpeg\" width=\"80\" alt=\"Classifier-free graph diffusion for molecular property targeting\" \/><\/div><\/div><div class=\"tp_publication tp_publication_inproceedings\"><div class=\"tp_pub_number\">9.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Simone, Lorenzo;  Bacciu, Davide;  Gervasi, Vincenzo<\/p><p class=\"tp_pub_title\">Quasi-Orthogonal ECG-Frank XYZ Transformation with\u00a0Energy-Based Models and\u00a0Clinical Text <span class=\"tp_pub_type inproceedings\">Proceedings Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span> Finkelstein, Joseph;  Moskovitch, Robert;  Parimbelli, Enea (Ed.): <span class=\"tp_pub_additional_booktitle\">Artificial Intelligence in Medicine, <\/span><span 
class=\"tp_pub_additional_pages\">pp. 249\u2013253, <\/span><span class=\"tp_pub_additional_publisher\">Springer Nature Switzerland, <\/span><span class=\"tp_pub_additional_address\">Cham, <\/span><span class=\"tp_pub_additional_year\">2024<\/span>, <span class=\"tp_pub_additional_isbn\">ISBN: 978-3-031-66535-6<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_256\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('256','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_256\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('256','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_256\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@inproceedings{10.1007\/978-3-031-66535-6_27,<br \/>\r\ntitle = {Quasi-Orthogonal ECG-Frank XYZ Transformation with\u00a0Energy-Based Models and\u00a0Clinical Text},<br \/>\r\nauthor = {Lorenzo Simone and Davide Bacciu and Vincenzo Gervasi},<br \/>\r\neditor = {Joseph Finkelstein and Robert Moskovitch and Enea Parimbelli},<br \/>\r\nisbn = {978-3-031-66535-6},<br \/>\r\nyear  = {2024},<br \/>\r\ndate = {2024-01-01},<br \/>\r\nbooktitle = {Artificial Intelligence in Medicine},<br \/>\r\npages = {249\u2013253},<br \/>\r\npublisher = {Springer Nature Switzerland},<br \/>\r\naddress = {Cham},<br \/>\r\nabstract = {The transformation of 12-Lead electrocardiograms to 3D vectorcardiograms, along with its reverse process, offer numerous advantages for computer visualization, signal transmission and analysis. Recent literature has shown increasing interest in this structured representation, due to its effectiveness in various cardiac evaluations and machine learning-based arrhythmia prediction. 
Current transformation techniques utilize fixed matrices, often retrieved through regression methods which fail to correlate with patient's physical characteristics or ongoing diseases. In this paper, we propose the first quasi-orthogonal transformation handling multi-modal input (12-lead ECG and clinical annotations) through a conditional energy-based model. Within our novel probabilistic formulation, the model proposes multiple transformation coefficients without relying on a single fixed approximation to better highlight relationships between latent factors and structured output. The evaluation of our approach, conducted with a nested cross validation on PTB Diagnostic dataset, showcased improved reconstruction precision across various cardiac conditions compared to state-of-the-art techniques (Kors, Dower, and QSLV).},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {inproceedings}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('256','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_256\" style=\"display:none;\"><div class=\"tp_abstract_entry\">The transformation of 12-Lead electrocardiograms to 3D vectorcardiograms, along with its reverse process, offer numerous advantages for computer visualization, signal transmission and analysis. Recent literature has shown increasing interest in this structured representation, due to its effectiveness in various cardiac evaluations and machine learning-based arrhythmia prediction. Current transformation techniques utilize fixed matrices, often retrieved through regression methods which fail to correlate with patient's physical characteristics or ongoing diseases. In this paper, we propose the first quasi-orthogonal transformation handling multi-modal input (12-lead ECG and clinical annotations) through a conditional energy-based model. 
Within our novel probabilistic formulation, the model proposes multiple transformation coefficients without relying on a single fixed approximation to better highlight relationships between latent factors and structured output. The evaluation of our approach, conducted with a nested cross validation on PTB Diagnostic dataset, showcased improved reconstruction precision across various cardiac conditions compared to state-of-the-art techniques (Kors, Dower, and QSLV).<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('256','tp_abstract')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><\/div><\/div><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_number\">10.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Carta, Antonio;  Cossu, Andrea;  Lomonaco, Vincenzo;  Bacciu, Davide;  Weijer, Joost<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('257','tp_links')\" style=\"cursor:pointer;\">Projected Latent Distillation for Data-Agnostic Consolidation in distributed continual learning<\/a> <span class=\"tp_pub_type article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Neurocomputing, <\/span><span class=\"tp_pub_additional_volume\">vol. 598, <\/span><span class=\"tp_pub_additional_pages\">pp. 
127935, <\/span><span class=\"tp_pub_additional_year\">2024<\/span>, <span class=\"tp_pub_additional_issn\">ISSN: 0925-2312<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_257\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('257','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_257\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('257','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_257\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('257','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_257\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{CARTA2024127935,<br \/>\r\ntitle = {Projected Latent Distillation for Data-Agnostic Consolidation in distributed continual learning},<br \/>\r\nauthor = {Antonio Carta and Andrea Cossu and Vincenzo Lomonaco and Davide Bacciu and Joost Weijer},<br \/>\r\nurl = {https:\/\/www.sciencedirect.com\/science\/article\/pii\/S0925231224007069},<br \/>\r\ndoi = {https:\/\/doi.org\/10.1016\/j.neucom.2024.127935},<br \/>\r\nissn = {0925-2312},<br \/>\r\nyear  = {2024},<br \/>\r\ndate = {2024-01-01},<br \/>\r\nurldate = {2024-01-01},<br \/>\r\njournal = {Neurocomputing},<br \/>\r\nvolume = {598},<br \/>\r\npages = {127935},<br \/>\r\nabstract = {In continual learning applications on-the-edge multiple self-centered devices (SCD) learn different local tasks independently, with each SCD only optimizing its own task. Can we achieve (almost) zero-cost collaboration between different devices? 
We formalize this problem as a Distributed Continual Learning (DCL) scenario, where SCDs greedily adapt to their own local tasks and a separate continual learning (CL) model perform a sparse and asynchronous consolidation step that combines the SCD models sequentially into a single multi-task model without using the original data. Unfortunately, current CL methods are not directly applicable to this scenario. We propose Data-Agnostic Consolidation (DAC), a novel double knowledge distillation method which performs distillation in the latent space via a novel Projected Latent Distillation loss. Experimental results show that DAC enables forward transfer between SCDs and reaches state-of-the-art accuracy on Split CIFAR100, CORe50 and Split TinyImageNet, both in single device and distributed CL scenarios. Somewhat surprisingly, a single out-of-distribution image is sufficient as the only source of data for DAC.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('257','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_257\" style=\"display:none;\"><div class=\"tp_abstract_entry\">In continual learning applications on-the-edge multiple self-centered devices (SCD) learn different local tasks independently, with each SCD only optimizing its own task. Can we achieve (almost) zero-cost collaboration between different devices? We formalize this problem as a Distributed Continual Learning (DCL) scenario, where SCDs greedily adapt to their own local tasks and a separate continual learning (CL) model perform a sparse and asynchronous consolidation step that combines the SCD models sequentially into a single multi-task model without using the original data. Unfortunately, current CL methods are not directly applicable to this scenario. 
We propose Data-Agnostic Consolidation (DAC), a novel double knowledge distillation method which performs distillation in the latent space via a novel Projected Latent Distillation loss. Experimental results show that DAC enables forward transfer between SCDs and reaches state-of-the-art accuracy on Split CIFAR100, CORe50 and Split TinyImageNet, both in single device and distributed CL scenarios. Somewhat surprisingly, a single out-of-distribution image is sufficient as the only source of data for DAC.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('257','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_257\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-globe\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/www.sciencedirect.com\/science\/article\/pii\/S0925231224007069\" title=\"https:\/\/www.sciencedirect.com\/science\/article\/pii\/S0925231224007069\" target=\"_blank\">https:\/\/www.sciencedirect.com\/science\/article\/pii\/S0925231224007069<\/a><\/li><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/https:\/\/doi.org\/10.1016\/j.neucom.2024.127935\" title=\"Follow DOI:https:\/\/doi.org\/10.1016\/j.neucom.2024.127935\" target=\"_blank\">doi:https:\/\/doi.org\/10.1016\/j.neucom.2024.127935<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('257','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Projected Latent Distillation for Data-Agnostic Consolidation in distributed continual learning\" src=\"https:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/neurocomputing.png\" width=\"80\" alt=\"Projected Latent Distillation for Data-Agnostic Consolidation in distributed continual learning\" \/><\/div><\/div><div class=\"tp_publication tp_publication_article\"><div 
class=\"tp_pub_number\">11.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Cossu, Andrea;  Spinnato, Francesco;  Guidotti, Riccardo;  Bacciu, Davide<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('259','tp_links')\" style=\"cursor:pointer;\">Drifting explanations in continual learning<\/a> <span class=\"tp_pub_type article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Neurocomputing, <\/span><span class=\"tp_pub_additional_volume\">vol. 597, <\/span><span class=\"tp_pub_additional_pages\">pp. 127960, <\/span><span class=\"tp_pub_additional_year\">2024<\/span>, <span class=\"tp_pub_additional_issn\">ISSN: 0925-2312<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_259\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('259','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_259\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('259','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_259\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('259','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_259\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{COSSU2024127960,<br \/>\r\ntitle = {Drifting explanations in continual learning},<br \/>\r\nauthor = {Andrea Cossu and Francesco Spinnato and Riccardo Guidotti and Davide Bacciu},<br \/>\r\nurl = {https:\/\/www.sciencedirect.com\/science\/article\/pii\/S0925231224007318},<br \/>\r\ndoi = {https:\/\/doi.org\/10.1016\/j.neucom.2024.127960},<br \/>\r\nissn = {0925-2312},<br \/>\r\nyear  = {2024},<br \/>\r\ndate = 
{2024-01-01},<br \/>\r\nurldate = {2024-01-01},<br \/>\r\njournal = {Neurocomputing},<br \/>\r\nvolume = {597},<br \/>\r\npages = {127960},<br \/>\r\nabstract = {Continual Learning (CL) trains models on streams of data, with the aim of learning new information without forgetting previous knowledge. However, many of these models lack interpretability, making it difficult to understand or explain how they make decisions. This lack of interpretability becomes even more challenging given the non-stationary nature of the data streams in CL. Furthermore, CL strategies aimed at mitigating forgetting directly impact the learned representations. We study the behavior of different explanation methods in CL and propose CLEX (ContinuaL EXplanations), an evaluation protocol to robustly assess the change of explanations in Class-Incremental scenarios, where forgetting is pronounced. We observed that models with similar predictive accuracy do not generate similar explanations. Replay-based strategies, well-known to be some of the most effective ones in class-incremental scenarios, are able to generate explanations that are aligned to the ones of a model trained offline. On the contrary, naive fine-tuning often results in degenerate explanations that drift from the ones of an offline model. Finally, we discovered that even replay strategies do not always operate at best when applied to fully-trained recurrent models. Instead, randomized recurrent models (leveraging on an untrained recurrent component) clearly reduce the drift of the explanations. 
This discrepancy between fully-trained and randomized recurrent models, previously known only in the context of their predictive continual performance, is more general, including also continual explanations.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('259','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_259\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Continual Learning (CL) trains models on streams of data, with the aim of learning new information without forgetting previous knowledge. However, many of these models lack interpretability, making it difficult to understand or explain how they make decisions. This lack of interpretability becomes even more challenging given the non-stationary nature of the data streams in CL. Furthermore, CL strategies aimed at mitigating forgetting directly impact the learned representations. We study the behavior of different explanation methods in CL and propose CLEX (ContinuaL EXplanations), an evaluation protocol to robustly assess the change of explanations in Class-Incremental scenarios, where forgetting is pronounced. We observed that models with similar predictive accuracy do not generate similar explanations. Replay-based strategies, well-known to be some of the most effective ones in class-incremental scenarios, are able to generate explanations that are aligned to the ones of a model trained offline. On the contrary, naive fine-tuning often results in degenerate explanations that drift from the ones of an offline model. Finally, we discovered that even replay strategies do not always operate at best when applied to fully-trained recurrent models. Instead, randomized recurrent models (leveraging on an untrained recurrent component) clearly reduce the drift of the explanations. 
This discrepancy between fully-trained and randomized recurrent models, previously known only in the context of their predictive continual performance, is more general, including also continual explanations.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('259','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_259\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-globe\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/www.sciencedirect.com\/science\/article\/pii\/S0925231224007318\" title=\"https:\/\/www.sciencedirect.com\/science\/article\/pii\/S0925231224007318\" target=\"_blank\">https:\/\/www.sciencedirect.com\/science\/article\/pii\/S0925231224007318<\/a><\/li><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/https:\/\/doi.org\/10.1016\/j.neucom.2024.127960\" title=\"Follow DOI:https:\/\/doi.org\/10.1016\/j.neucom.2024.127960\" target=\"_blank\">doi:https:\/\/doi.org\/10.1016\/j.neucom.2024.127960<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('259','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Drifting explanations in continual learning\" src=\"https:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/neurocomputing.png\" width=\"80\" alt=\"Drifting explanations in continual learning\" \/><\/div><\/div><div class=\"tp_publication tp_publication_inproceedings\"><div class=\"tp_pub_number\">12.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Ceni, Andrea;  Cossu, Andrea;  St\u00f6lzle, Maximilian W;  Liu, Jingyue;  Santina, Cosimo Della;  Bacciu, Davide;  Gallicchio, Claudio<\/p><p class=\"tp_pub_title\">Random Oscillators Network for Time Series Processing <span class=\"tp_pub_type inproceedings\">Proceedings Article<\/span> <\/p><p 
class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_booktitle\">International Conference on Artificial Intelligence and Statistics, <\/span><span class=\"tp_pub_additional_pages\">pp. 4807\u20134815, <\/span><span class=\"tp_pub_additional_organization\">PMLR <\/span><span class=\"tp_pub_additional_year\">2024<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_263\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('263','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_263\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('263','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_263\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@inproceedings{ceni2024random,<br \/>\r\ntitle = {Random Oscillators Network for Time Series Processing},<br \/>\r\nauthor = {Andrea Ceni and Andrea Cossu and Maximilian W St\u00f6lzle and Jingyue Liu and Cosimo Della Santina and Davide Bacciu and Claudio Gallicchio},<br \/>\r\nyear  = {2024},<br \/>\r\ndate = {2024-01-01},<br \/>\r\nurldate = {2024-01-01},<br \/>\r\nbooktitle = {International Conference on Artificial Intelligence and Statistics},<br \/>\r\npages = {4807\u20134815},<br \/>\r\norganization = {PMLR},<br \/>\r\nabstract = {We introduce the Random Oscillators Network (RON), a physically-inspired recurrent model derived from a network of heterogeneous oscillators. Unlike traditional recurrent neural networks, RON keeps the connections between oscillators untrained by leveraging on smart random initialisations, leading to exceptional computational efficiency. 
A rigorous theoretical analysis finds the necessary and sufficient conditions for the stability of RON, highlighting the natural tendency of RON to lie at the edge of stability, a regime of configurations offering particularly powerful and expressive models. Through an extensive empirical evaluation on several benchmarks, we show four main advantages of RON. 1) RON shows excellent long-term memory and sequence classification ability, outperforming other randomised approaches. 2) RON outperforms fully-trained recurrent models and state-of-the-art randomised models in chaotic time series forecasting. 3) RON provides expressive internal representations even in a small parametrisation regime making it amenable to be deployed on low-powered devices and at the edge. 4) RON is up to two orders of magnitude faster than fully-trained models. },<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {inproceedings}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('263','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_263\" style=\"display:none;\"><div class=\"tp_abstract_entry\">We introduce the Random Oscillators Network (RON), a physically-inspired recurrent model derived from a network of heterogeneous oscillators. Unlike traditional recurrent neural networks, RON keeps the connections between oscillators untrained by leveraging on smart random initialisations, leading to exceptional computational efficiency. A rigorous theoretical analysis finds the necessary and sufficient conditions for the stability of RON, highlighting the natural tendency of RON to lie at the edge of stability, a regime of configurations offering particularly powerful and expressive models. Through an extensive empirical evaluation on several benchmarks, we show four main advantages of RON. 
1) RON shows excellent long-term memory and sequence classification ability, outperforming other randomised approaches. 2) RON outperforms fully-trained recurrent models and state-of-the-art randomised models in chaotic time series forecasting. 3) RON provides expressive internal representations even in a small parametrisation regime making it amenable to be deployed on low-powered devices and at the edge. 4) RON is up to two orders of magnitude faster than fully-trained models. <\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('263','tp_abstract')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Random Oscillators Network for Time Series Processing\" src=\"https:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/aistats.jpg\" width=\"80\" alt=\"Random Oscillators Network for Time Series Processing\" \/><\/div><\/div><div class=\"tp_publication tp_publication_inproceedings\"><div class=\"tp_pub_number\">13.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Li, Lanpei;  Piccoli, Elia;  Cossu, Andrea;  Bacciu, Davide;  Lomonaco, Vincenzo<\/p><p class=\"tp_pub_title\">Calibration of Continual Learning Models <span class=\"tp_pub_type inproceedings\">Proceedings Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_booktitle\">Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, <\/span><span class=\"tp_pub_additional_pages\">pp. 
4160\u20134169, <\/span><span class=\"tp_pub_additional_year\">2024<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_265\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('265','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_265\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('265','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_265\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@inproceedings{li2024calibration,<br \/>\r\ntitle = {Calibration of Continual Learning Models},<br \/>\r\nauthor = {Lanpei Li and Elia Piccoli and Andrea Cossu and Davide Bacciu and Vincenzo Lomonaco},<br \/>\r\nyear  = {2024},<br \/>\r\ndate = {2024-01-01},<br \/>\r\nurldate = {2024-01-01},<br \/>\r\nbooktitle = {Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition},<br \/>\r\npages = {4160\u20134169},<br \/>\r\nabstract = {Continual Learning (CL) focuses on maximizing the predictive performance of a model across a non-stationary stream of data. Unfortunately CL models tend to forget previous knowledge thus often underperforming when compared with an offline model trained jointly on the entire data stream. Given that any CL model will eventually make mistakes it is of crucial importance to build calibrated CL models: models that can reliably tell their confidence when making a prediction. Model calibration is an active research topic in machine learning yet to be properly investigated in CL. We provide the first empirical study of the behavior of calibration approaches in CL showing that CL strategies do not inherently learn calibrated models. 
To mitigate this issue we design a continual calibration approach that improves the performance of post-processing calibration methods over a wide range of different benchmarks and CL strategies. CL does not necessarily need perfect predictive models but rather it can benefit from reliable predictive models. We believe our study on continual calibration represents a first step towards this direction.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {inproceedings}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('265','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_265\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Continual Learning (CL) focuses on maximizing the predictive performance of a model across a non-stationary stream of data. Unfortunately CL models tend to forget previous knowledge thus often underperforming when compared with an offline model trained jointly on the entire data stream. Given that any CL model will eventually make mistakes it is of crucial importance to build calibrated CL models: models that can reliably tell their confidence when making a prediction. Model calibration is an active research topic in machine learning yet to be properly investigated in CL. We provide the first empirical study of the behavior of calibration approaches in CL showing that CL strategies do not inherently learn calibrated models. To mitigate this issue we design a continual calibration approach that improves the performance of post-processing calibration methods over a wide range of different benchmarks and CL strategies. CL does not necessarily need perfect predictive models but rather it can benefit from reliable predictive models. 
We believe our study on continual calibration represents a first step towards this direction.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('265','tp_abstract')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Calibration of Continual Learning Models\" src=\"https:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/cvpr.jpg\" width=\"80\" alt=\"Calibration of Continual Learning Models\" \/><\/div><\/div><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_number\">14.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Gravina, Alessio;  Bacciu, Davide<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('266','tp_links')\" style=\"cursor:pointer;\">Deep Learning for Dynamic Graphs: Models and Benchmarks<\/a> <span class=\"tp_pub_type article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">IEEE Transactions on Neural Networks and Learning Systems, <\/span><span class=\"tp_pub_additional_pages\">pp. 
1-14, <\/span><span class=\"tp_pub_additional_year\">2024<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_266\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('266','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_266\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('266','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_266\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('266','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_266\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{10490120,<br \/>\r\ntitle = {Deep Learning for Dynamic Graphs: Models and Benchmarks},<br \/>\r\nauthor = {Alessio Gravina and Davide Bacciu},<br \/>\r\ndoi = {10.1109\/TNNLS.2024.3379735},<br \/>\r\nyear  = {2024},<br \/>\r\ndate = {2024-01-01},<br \/>\r\nurldate = {2024-01-01},<br \/>\r\njournal = {IEEE Transactions on Neural Networks and Learning Systems},<br \/>\r\npages = {1-14},<br \/>\r\nabstract = {Recent progress in research on deep graph networks (DGNs) has led to a maturation of the domain of learning on graphs. Despite the growth of this research field, there are still important challenges that are yet unsolved. Specifically, there is an urgent need to make DGNs suitable for predictive tasks on real-world systems of interconnected entities, which evolve over time. With the aim of fostering research in the domain of dynamic graphs, first, we survey recent advances in learning both temporal and spatial information, providing a comprehensive overview of the current state-of-the-art in the domain of representation learning for dynamic graphs. 
Second, we conduct a fair performance comparison among the most popular proposed approaches on node- and edge-level tasks, leveraging rigorous model selection and assessment for all the methods, thus establishing a sound baseline for evaluating new architectures and approaches.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('266','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_266\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Recent progress in research on deep graph networks (DGNs) has led to a maturation of the domain of learning on graphs. Despite the growth of this research field, there are still important challenges that are yet unsolved. Specifically, there is an urgent need to make DGNs suitable for predictive tasks on real-world systems of interconnected entities, which evolve over time. With the aim of fostering research in the domain of dynamic graphs, first, we survey recent advances in learning both temporal and spatial information, providing a comprehensive overview of the current state-of-the-art in the domain of representation learning for dynamic graphs. 
Second, we conduct a fair performance comparison among the most popular proposed approaches on node- and edge-level tasks, leveraging rigorous model selection and assessment for all the methods, thus establishing a sound baseline for evaluating new architectures and approaches.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('266','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_266\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1109\/TNNLS.2024.3379735\" title=\"Follow DOI:10.1109\/TNNLS.2024.3379735\" target=\"_blank\">doi:10.1109\/TNNLS.2024.3379735<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('266','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Deep Learning for Dynamic Graphs: Models and Benchmarks\" src=\"https:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/tnnls.jpg\" width=\"80\" alt=\"Deep Learning for Dynamic Graphs: Models and Benchmarks\" \/><\/div><\/div><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_number\">15.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Zhang, Kun;  Shpitser, Ilya;  Magliacane, Sara;  Bacciu, Davide;  Wu, Fei;  Zhang, Changshui;  Spirtes, Peter<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('267','tp_links')\" style=\"cursor:pointer;\">IEEE Transactions on Neural Networks and Learning Systems Special Issue on Causal Discovery and Causality-Inspired Machine Learning<\/a> <span class=\"tp_pub_type article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">IEEE Transactions on Neural Networks and Learning Systems, 
<\/span><span class=\"tp_pub_additional_volume\">vol. 35, <\/span><span class=\"tp_pub_additional_number\">no. 4, <\/span><span class=\"tp_pub_additional_pages\">pp. 4899-4901, <\/span><span class=\"tp_pub_additional_year\">2024<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_267\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('267','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_267\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('267','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_267\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('267','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_267\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{10492646,<br \/>\r\ntitle = {IEEE Transactions on Neural Networks and Learning Systems Special Issue on Causal Discovery and Causality-Inspired Machine Learning},<br \/>\r\nauthor = {Kun Zhang and Ilya Shpitser and Sara Magliacane and Davide Bacciu and Fei Wu and Changshui Zhang and Peter Spirtes},<br \/>\r\ndoi = {10.1109\/TNNLS.2024.3365968},<br \/>\r\nyear  = {2024},<br \/>\r\ndate = {2024-01-01},<br \/>\r\nurldate = {2024-01-01},<br \/>\r\njournal = {IEEE Transactions on Neural Networks and Learning Systems},<br \/>\r\nvolume = {35},<br \/>\r\nnumber = {4},<br \/>\r\npages = {4899-4901},<br \/>\r\nabstract = {Causality is a fundamental notion in science and engineering. It has attracted much interest across research communities in statistics, machine learning (ML), healthcare, and artificial intelligence (AI), and is becoming increasingly recognized as a vital research area. 
One of the fundamental problems in causality is how to find the causal structure or the underlying causal model. Accordingly, one focus of this Special Issue is on causal discovery, i.e., how can we discover causal structure over a set of variables from observational data with automated procedures? Besides learning causality, another focus is on using causality to help understand and advance ML, that is, causality-inspired ML.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('267','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_267\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Causality is a fundamental notion in science and engineering. It has attracted much interest across research communities in statistics, machine learning (ML), healthcare, and artificial intelligence (AI), and is becoming increasingly recognized as a vital research area. One of the fundamental problems in causality is how to find the causal structure or the underlying causal model. Accordingly, one focus of this Special Issue is on causal discovery, i.e., how can we discover causal structure over a set of variables from observational data with automated procedures? 
Besides learning causality, another focus is on using causality to help understand and advance ML, that is, causality-inspired ML.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('267','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_267\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1109\/TNNLS.2024.3365968\" title=\"Follow DOI:10.1109\/TNNLS.2024.3365968\" target=\"_blank\">doi:10.1109\/TNNLS.2024.3365968<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('267','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"IEEE Transactions on Neural Networks and Learning Systems Special Issue on Causal Discovery and Causality-Inspired Machine Learning\" src=\"https:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/tnnls.jpg\" width=\"80\" alt=\"IEEE Transactions on Neural Networks and Learning Systems Special Issue on Causal Discovery and Causality-Inspired Machine Learning\" \/><\/div><\/div><div class=\"tp_publication tp_publication_inproceedings\"><div class=\"tp_pub_number\">16.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Resta, Michele;  Bacciu, Davide<\/p><p class=\"tp_pub_title\">Self-generated Replay Memories for Continual Neural Machine Translation <span class=\"tp_pub_type inproceedings\">Proceedings Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_booktitle\">Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), <\/span><span class=\"tp_pub_additional_pages\">pp. 
175\u2013191, <\/span><span class=\"tp_pub_additional_year\">2024<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_268\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('268','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_268\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('268','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_268\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@inproceedings{resta2024self,<br \/>\r\ntitle = {Self-generated Replay Memories for Continual Neural Machine Translation},<br \/>\r\nauthor = {Michele Resta and Davide Bacciu},<br \/>\r\nyear  = {2024},<br \/>\r\ndate = {2024-01-01},<br \/>\r\nurldate = {2024-01-01},<br \/>\r\nbooktitle = {Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)},<br \/>\r\npages = {175\u2013191},<br \/>\r\nabstract = {Modern Neural Machine Translation systems exhibit strong performance in several different languages and are constantly improving. Their ability to learn continuously is, however, still severely limited by the catastrophic forgetting issue. In this work, we leverage a key property of encoder-decoder Transformers, i.e. their generative ability, to propose a novel approach to continually learning Neural Machine Translation systems. We show how this can effectively learn on a stream of experiences comprising different languages, by leveraging a replay memory populated by using the model itself as a generator of parallel sentences. We empirically demonstrate that our approach can counteract catastrophic forgetting without requiring explicit memorization of training data. Code will be publicly available upon publication. 
Code: https:\/\/github.com\/m-resta\/sg-rep},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {inproceedings}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('268','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_268\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Modern Neural Machine Translation systems exhibit strong performance in several different languages and are constantly improving. Their ability to learn continuously is, however, still severely limited by the catastrophic forgetting issue. In this work, we leverage a key property of encoder-decoder Transformers, i.e. their generative ability, to propose a novel approach to continually learning Neural Machine Translation systems. We show how this can effectively learn on a stream of experiences comprising different languages, by leveraging a replay memory populated by using the model itself as a generator of parallel sentences. We empirically demonstrate that our approach can counteract catastrophic forgetting without requiring explicit memorization of training data. Code will be publicly available upon publication. 
Code: https:\/\/github.com\/m-resta\/sg-rep<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('268','tp_abstract')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Self-generated Replay Memories for Continual Neural Machine Translation\" src=\"https:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/08\/naacl-300x300.jpg\" width=\"80\" alt=\"Self-generated Replay Memories for Continual Neural Machine Translation\" \/><\/div><\/div><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_number\">17.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Cossu, Andrea;  Carta, Antonio;  Passaro, Lucia;  Lomonaco, Vincenzo;  Tuytelaars, Tinne;  Bacciu, Davide<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('255','tp_links')\" style=\"cursor:pointer;\">Continual pre-training mitigates forgetting in language and vision<\/a> <span class=\"tp_pub_type article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Neural Networks, <\/span><span class=\"tp_pub_additional_volume\">vol. 179, <\/span><span class=\"tp_pub_additional_pages\">pp. 
106492, <\/span><span class=\"tp_pub_additional_year\">2024<\/span>, <span class=\"tp_pub_additional_issn\">ISSN: 0893-6080<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_255\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('255','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_255\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('255','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_255\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('255','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_255\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{COSSU2024106492,<br \/>\r\ntitle = {Continual pre-training mitigates forgetting in language and vision},<br \/>\r\nauthor = {Andrea Cossu and Antonio Carta and Lucia Passaro and Vincenzo Lomonaco and Tinne Tuytelaars and Davide Bacciu},<br \/>\r\nurl = {https:\/\/www.sciencedirect.com\/science\/article\/pii\/S0893608024004167},<br \/>\r\ndoi = {https:\/\/doi.org\/10.1016\/j.neunet.2024.106492},<br \/>\r\nissn = {0893-6080},<br \/>\r\nyear  = {2024},<br \/>\r\ndate = {2024-01-01},<br \/>\r\nurldate = {2024-01-01},<br \/>\r\njournal = {Neural Networks},<br \/>\r\nvolume = {179},<br \/>\r\npages = {106492},<br \/>\r\nabstract = {Pre-trained models are commonly used in Continual Learning to initialize the model before training on the stream of non-stationary data. However, pre-training is rarely applied during Continual Learning. We investigate the characteristics of the Continual Pre-Training scenario, where a model is continually pre-trained on a stream of incoming data and only later fine-tuned to different downstream tasks. 
We introduce an evaluation protocol for Continual Pre-Training which monitors forgetting against a Forgetting Control dataset not present in the continual stream. We disentangle the impact on forgetting of 3 main factors: the input modality (NLP, Vision), the architecture type (Transformer, ResNet) and the pre-training protocol (supervised, self-supervised). Moreover, we propose a Sample-Efficient Pre-training method (SEP) that speeds up the pre-training phase. We show that the pre-training protocol is the most important factor accounting for forgetting. Surprisingly, we discovered that self-supervised continual pre-training in both NLP and Vision is sufficient to mitigate forgetting without the use of any Continual Learning strategy. Other factors, like model depth, input modality and architecture type are not as crucial.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('255','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_255\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Pre-trained models are commonly used in Continual Learning to initialize the model before training on the stream of non-stationary data. However, pre-training is rarely applied during Continual Learning. We investigate the characteristics of the Continual Pre-Training scenario, where a model is continually pre-trained on a stream of incoming data and only later fine-tuned to different downstream tasks. We introduce an evaluation protocol for Continual Pre-Training which monitors forgetting against a Forgetting Control dataset not present in the continual stream. We disentangle the impact on forgetting of 3 main factors: the input modality (NLP, Vision), the architecture type (Transformer, ResNet) and the pre-training protocol (supervised, self-supervised). 
Moreover, we propose a Sample-Efficient Pre-training method (SEP) that speeds up the pre-training phase. We show that the pre-training protocol is the most important factor accounting for forgetting. Surprisingly, we discovered that self-supervised continual pre-training in both NLP and Vision is sufficient to mitigate forgetting without the use of any Continual Learning strategy. Other factors, like model depth, input modality and architecture type are not as crucial.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('255','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_255\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-globe\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/www.sciencedirect.com\/science\/article\/pii\/S0893608024004167\" title=\"https:\/\/www.sciencedirect.com\/science\/article\/pii\/S0893608024004167\" target=\"_blank\">https:\/\/www.sciencedirect.com\/science\/article\/pii\/S0893608024004167<\/a><\/li><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/https:\/\/doi.org\/10.1016\/j.neunet.2024.106492\" title=\"Follow DOI:https:\/\/doi.org\/10.1016\/j.neunet.2024.106492\" target=\"_blank\">doi:https:\/\/doi.org\/10.1016\/j.neunet.2024.106492<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('255','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Continual pre-training mitigates forgetting in language and vision\" src=\"https:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2020\/06\/08936080.jpg\" width=\"80\" alt=\"Continual pre-training mitigates forgetting in language and vision\" \/><\/div><\/div><h3 class=\"tp_h3\" id=\"tp_h3_2023\">2023<\/h3><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_number\">18.<\/div><div 
class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Lepri, Marco;  Bacciu, Davide;  Santina, Cosimo Della<\/p><p class=\"tp_pub_title\">Neural Autoencoder-Based Structure-Preserving Model Order Reduction and Control Design for High-Dimensional Physical Systems <span class=\"tp_pub_type article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">IEEE Control Systems Letters, <\/span><span class=\"tp_pub_additional_year\">2023<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_248\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('248','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_248\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{lepri2023neural,<br \/>\r\ntitle = {Neural Autoencoder-Based Structure-Preserving Model Order Reduction and Control Design for High-Dimensional Physical Systems},<br \/>\r\nauthor = {Marco Lepri and Davide Bacciu and Cosimo Della Santina},<br \/>\r\nyear  = {2023},<br \/>\r\ndate = {2023-12-21},<br \/>\r\nurldate = {2023-01-01},<br \/>\r\njournal = {IEEE Control Systems Letters},<br \/>\r\npublisher = {IEEE},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('248','tp_bibtex')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Neural Autoencoder-Based Structure-Preserving Model Order Reduction and Control Design for High-Dimensional Physical Systems\" src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/ieeecss.png\" width=\"80\" alt=\"Neural Autoencoder-Based Structure-Preserving Model Order Reduction and Control Design for High-Dimensional Physical Systems\" 
\/><\/div><\/div><div class=\"tp_publication tp_publication_inproceedings\"><div class=\"tp_pub_number\">19.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Georgiev, Dobrik Georgiev;  Numeroso, Danilo;  Bacciu, Davide;  Li\u00f2, Pietro<\/p><p class=\"tp_pub_title\">Neural algorithmic reasoning for combinatorial optimisation <span class=\"tp_pub_type inproceedings\">Proceedings Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_booktitle\">Learning on Graphs Conference, <\/span><span class=\"tp_pub_additional_pages\">pp. 28\u20131, <\/span><span class=\"tp_pub_additional_organization\">PMLR <\/span><span class=\"tp_pub_additional_year\">2023<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_264\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('264','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_264\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('264','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_264\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@inproceedings{georgiev2024neural,<br \/>\r\ntitle = {Neural algorithmic reasoning for combinatorial optimisation},<br \/>\r\nauthor = {Dobrik Georgiev Georgiev and Danilo Numeroso and Davide Bacciu and Pietro Li\u00f2},<br \/>\r\nyear  = {2023},<br \/>\r\ndate = {2023-12-15},<br \/>\r\nurldate = {2023-12-15},<br \/>\r\nbooktitle = {Learning on Graphs Conference},<br \/>\r\npages = {28\u20131},<br \/>\r\norganization = {PMLR},<br \/>\r\nabstract = {Solving NP-hard\/complete combinatorial problems with neural networks is a challenging research area that aims to surpass classical approximate algorithms. 
The long-term objective is to outperform hand-designed heuristics for NP-hard\/complete problems by learning to generate superior solutions solely from training data. Current neural-based methods for solving CO problems often overlook the inherent \"algorithmic\" nature of the problems. In contrast, heuristics designed for CO problems, e.g., TSP, frequently leverage well-established algorithms, such as those for finding the minimum spanning tree. In this paper, we propose leveraging recent advancements in neural algorithmic reasoning to improve the learning of CO problems. Specifically, we suggest pre-training our neural model on relevant algorithms before training it on CO instances. Our results demonstrate that, using this learning setup, we achieve superior performance compared to non-algorithmically informed deep learning models.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {inproceedings}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('264','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_264\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Solving NP-hard\/complete combinatorial problems with neural networks is a challenging research area that aims to surpass classical approximate algorithms. The long-term objective is to outperform hand-designed heuristics for NP-hard\/complete problems by learning to generate superior solutions solely from training data. Current neural-based methods for solving CO problems often overlook the inherent &quot;algorithmic&quot; nature of the problems. In contrast, heuristics designed for CO problems, e.g., TSP, frequently leverage well-established algorithms, such as those for finding the minimum spanning tree. In this paper, we propose leveraging recent advancements in neural algorithmic reasoning to improve the learning of CO problems. 
Specifically, we suggest pre-training our neural model on relevant algorithms before training it on CO instances. Our results demonstrate that, using this learning setup, we achieve superior performance compared to non-algorithmically informed deep learning models.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('264','tp_abstract')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Neural algorithmic reasoning for combinatorial optimisation\" src=\"https:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/log.png\" width=\"80\" alt=\"Neural algorithmic reasoning for combinatorial optimisation\" \/><\/div><\/div><div class=\"tp_publication tp_publication_workshop\"><div class=\"tp_pub_number\">20.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Gravina, Alessio;  Lovisotto, Giulio;  Gallicchio, Claudio;  Bacciu, Davide;  Grohnfeldt, Claas<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('246','tp_links')\" style=\"cursor:pointer;\">Effective Non-Dissipative Propagation for Continuous-Time Dynamic Graphs<\/a> <span class=\"tp_pub_type workshop\">Workshop<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_booktitle\">Temporal Graph Learning Workshop, NeurIPS 2023, <\/span><span class=\"tp_pub_additional_year\">2023<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_246\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('246','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_246\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('246','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_246\" class=\"tp_show\" 
onclick=\"teachpress_pub_showhide('246','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_246\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@workshop{Gravina2023b,<br \/>\r\ntitle = {Effective Non-Dissipative Propagation for Continuous-Time Dynamic Graphs},<br \/>\r\nauthor = {Alessio Gravina and Giulio Lovisotto and Claudio Gallicchio and Davide Bacciu and Claas Grohnfeldt},<br \/>\r\nurl = {https:\/\/openreview.net\/forum?id=zAHFC2LNEe, PDF},<br \/>\r\nyear  = {2023},<br \/>\r\ndate = {2023-12-11},<br \/>\r\nurldate = {2023-12-11},<br \/>\r\nbooktitle = {Temporal Graph Learning Workshop, NeurIPS 2023},<br \/>\r\nabstract = {Recent research on Deep Graph Networks (DGNs) has broadened the domain of learning on graphs to real-world systems of interconnected entities that evolve over time. This paper addresses prediction problems on graphs defined by a stream of events, possibly irregularly sampled over time, generally referred to as Continuous-Time Dynamic Graphs (C-TDGs). While many predictive problems on graphs may require capturing interactions between nodes at different distances, existing DGNs for C-TDGs are not designed to propagate and preserve long-range information - resulting in suboptimal performance. In this work, we present Continuous-Time Graph Anti-Symmetric Network (CTAN), a DGN for C-TDGs designed within the ordinary differential equations framework that enables efficient propagation of long-range dependencies. We show that our method robustly performs stable and non-dissipative information propagation over dynamically evolving graphs, where the number of ODE discretization steps allows scaling the propagation range. 
We empirically validate the proposed approach on several real and synthetic graph benchmarks, showing that CTAN leads to improved performance while enabling the propagation of long-range information.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {workshop}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('246','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_246\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Recent research on Deep Graph Networks (DGNs) has broadened the domain of learning on graphs to real-world systems of interconnected entities that evolve over time. This paper addresses prediction problems on graphs defined by a stream of events, possibly irregularly sampled over time, generally referred to as Continuous-Time Dynamic Graphs (C-TDGs). While many predictive problems on graphs may require capturing interactions between nodes at different distances, existing DGNs for C-TDGs are not designed to propagate and preserve long-range information - resulting in suboptimal performance. In this work, we present Continuous-Time Graph Anti-Symmetric Network (CTAN), a DGN for C-TDGs designed within the ordinary differential equations framework that enables efficient propagation of long-range dependencies. We show that our method robustly performs stable and non-dissipative information propagation over dynamically evolving graphs, where the number of ODE discretization steps allows scaling the propagation range. 
We empirically validate the proposed approach on several real and synthetic graph benchmarks, showing that CTAN leads to improved performance while enabling the propagation of long-range information<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('246','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_246\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-globe\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/openreview.net\/forum?id=zAHFC2LNEe\" title=\"PDF\" target=\"_blank\">PDF<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('246','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Effective Non-Dissipative Propagation for Continuous-Time Dynamic Graphs\" src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2020\/11\/neurips.png\" width=\"80\" alt=\"Effective Non-Dissipative Propagation for Continuous-Time Dynamic Graphs\" \/><\/div><\/div><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_number\">21.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Errica, Federico;  Bacciu, Davide;  Micheli, Alessio<\/p><p class=\"tp_pub_title\">PyDGN: a Python Library for Flexible and Reproducible Research on Deep Learning for Graphs <span class=\"tp_pub_type article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Journal of Open Source Software, <\/span><span class=\"tp_pub_additional_volume\">vol. 8, <\/span><span class=\"tp_pub_additional_number\">no. 90, <\/span><span class=\"tp_pub_additional_pages\">pp. 
5713, <\/span><span class=\"tp_pub_additional_year\">2023<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_249\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('249','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_249\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{errica2023pydgn,<br \/>\r\ntitle = {PyDGN: a Python Library for Flexible and Reproducible Research on Deep Learning for Graphs},<br \/>\r\nauthor = {Federico Errica and Davide Bacciu and Alessio Micheli},<br \/>\r\nyear  = {2023},<br \/>\r\ndate = {2023-10-31},<br \/>\r\nurldate = {2023-01-01},<br \/>\r\njournal = {Journal of Open Source Software},<br \/>\r\nvolume = {8},<br \/>\r\nnumber = {90},<br \/>\r\npages = {5713},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('249','tp_bibtex')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"PyDGN: a Python Library for Flexible and Reproducible Research on Deep Learning for Graphs\" src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/joss.png\" width=\"80\" alt=\"PyDGN: a Python Library for Flexible and Reproducible Research on Deep Learning for Graphs\" \/><\/div><\/div><div class=\"tp_publication tp_publication_conference\"><div class=\"tp_pub_number\">22.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Errica, Federico;  Gravina, Alessio;  Bacciu, Davide;  Micheli, Alessio<\/p><p class=\"tp_pub_title\">Hidden Markov Models for Temporal Graph Representation Learning <span class=\"tp_pub_type conference\">Conference<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_booktitle\">Proceedings of the 31st European Symposium on Artificial 
Neural Networks, Computational Intelligence and Machine Learning, <\/span><span class=\"tp_pub_additional_year\">2023<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_234\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('234','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_234\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@conference{Errica2023,<br \/>\r\ntitle = {Hidden Markov Models for Temporal Graph Representation Learning},<br \/>\r\nauthor = {Federico Errica and Alessio Gravina and Davide Bacciu and Alessio Micheli},<br \/>\r\neditor = {Michel Verleysen},<br \/>\r\nyear  = {2023},<br \/>\r\ndate = {2023-10-04},<br \/>\r\nurldate = {2023-10-04},<br \/>\r\nbooktitle = {Proceedings of the 31st European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {conference}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('234','tp_bibtex')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Hidden Markov Models for Temporal Graph Representation Learning\" src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/esann.png\" width=\"80\" alt=\"Hidden Markov Models for Temporal Graph Representation Learning\" \/><\/div><\/div><div class=\"tp_publication tp_publication_conference\"><div class=\"tp_pub_number\">23.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Landolfi, Francesco;  Bacciu, Davide;  Numeroso, Danilo<\/p><p class=\"tp_pub_title\">A Tropical View of Graph Neural Networks <span class=\"tp_pub_type conference\">Conference<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_booktitle\">Proceedings of the 31st European 
Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, <\/span><span class=\"tp_pub_additional_year\">2023<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_235\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('235','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_235\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@conference{Landolfi2023,<br \/>\r\ntitle = {A Tropical View of Graph Neural Networks},<br \/>\r\nauthor = {Francesco Landolfi and Davide Bacciu and Danilo Numeroso},<br \/>\r\neditor = {Michel Verleysen},<br \/>\r\nyear  = {2023},<br \/>\r\ndate = {2023-10-04},<br \/>\r\nurldate = {2023-10-04},<br \/>\r\nbooktitle = {Proceedings of the 31st European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {conference}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('235','tp_bibtex')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"A Tropical View of Graph Neural Networks\" src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/esann.png\" width=\"80\" alt=\"A Tropical View of Graph Neural Networks\" \/><\/div><\/div><div class=\"tp_publication tp_publication_conference\"><div class=\"tp_pub_number\">24.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Ceni, Andrea;  Bacciu, Davide;  Caro, Valerio De;  Gallicchio, Claudio;  Oneto, Luca<\/p><p class=\"tp_pub_title\">Improving Fairness via Intrinsic Plasticity in Echo State Networks <span class=\"tp_pub_type conference\">Conference<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_booktitle\">Proceedings 
of the 31st European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, <\/span><span class=\"tp_pub_additional_year\">2023<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_236\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('236','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_236\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@conference{Ceni2023,<br \/>\r\ntitle = {Improving Fairness via Intrinsic Plasticity in Echo State Networks},<br \/>\r\nauthor = {Andrea Ceni and Davide Bacciu and Valerio De Caro and Claudio Gallicchio and Luca Oneto},<br \/>\r\neditor = {Michel Verleysen},<br \/>\r\nyear  = {2023},<br \/>\r\ndate = {2023-10-04},<br \/>\r\nurldate = {2023-10-04},<br \/>\r\nbooktitle = {Proceedings of the 31st European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {conference}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('236','tp_bibtex')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Improving Fairness via Intrinsic Plasticity in Echo State Networks\" src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/esann.png\" width=\"80\" alt=\"Improving Fairness via Intrinsic Plasticity in Echo State Networks\" \/><\/div><\/div><div class=\"tp_publication tp_publication_conference\"><div class=\"tp_pub_number\">25.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Cossu, Andrea;  Spinnato, Francesco;  Guidotti, Riccardo;  Bacciu, Davide<\/p><p class=\"tp_pub_title\">A Protocol for Continual Explanation of SHAP <span class=\"tp_pub_type conference\">Conference<\/span> <\/p><p 
class=\"tp_pub_additional\"><span class=\"tp_pub_additional_booktitle\">Proceedings of the 31st European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, <\/span><span class=\"tp_pub_additional_year\">2023<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_237\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('237','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_237\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@conference{Cossu2023,<br \/>\r\ntitle = {A Protocol for Continual Explanation of SHAP},<br \/>\r\nauthor = {Andrea Cossu and Francesco Spinnato and Riccardo Guidotti and Davide Bacciu},<br \/>\r\neditor = {Michel Verleysen},<br \/>\r\nyear  = {2023},<br \/>\r\ndate = {2023-10-04},<br \/>\r\nurldate = {2023-10-04},<br \/>\r\nbooktitle = {Proceedings of the 31st European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {conference}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('237','tp_bibtex')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"A Protocol for Continual Explanation of SHAP\" src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/esann.png\" width=\"80\" alt=\"A Protocol for Continual Explanation of SHAP\" \/><\/div><\/div><div class=\"tp_publication tp_publication_conference\"><div class=\"tp_pub_number\">26.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Caro, Valerio De;  Mauro, Antonio Di;  Bacciu, Davide;  Gallicchio, Claudio<\/p><p class=\"tp_pub_title\">Communication-Efficient Ridge Regression in Federated Echo State Networks <span class=\"tp_pub_type 
conference\">Conference<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_booktitle\">Proceedings of the 31st European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, <\/span><span class=\"tp_pub_additional_year\">2023<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_238\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('238','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_238\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@conference{Caro2023,<br \/>\r\ntitle = {Communication-Efficient Ridge Regression in Federated Echo State Networks},<br \/>\r\nauthor = {Valerio De Caro and Antonio Di Mauro and Davide Bacciu and Claudio Gallicchio},<br \/>\r\neditor = {Michel Verleysen},<br \/>\r\nyear  = {2023},<br \/>\r\ndate = {2023-10-04},<br \/>\r\nurldate = {2023-10-04},<br \/>\r\nbooktitle = {Proceedings of the 31st European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {conference}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('238','tp_bibtex')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Communication-Efficient Ridge Regression in Federated Echo State Networks\" src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/esann.png\" width=\"80\" alt=\"Communication-Efficient Ridge Regression in Federated Echo State Networks\" \/><\/div><\/div><div class=\"tp_publication tp_publication_conference\"><div class=\"tp_pub_number\">27.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Bacciu, Davide;  Errica, Federico;  Micheli, Alessio;  Navarin, 
Nicol\u00f2;  Pasa, Luca;  Podda, Marco;  Zambon, Daniele<\/p><p class=\"tp_pub_title\">Graph Representation Learning <span class=\"tp_pub_type conference\">Conference<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_booktitle\">Proceedings of the 31st European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, <\/span><span class=\"tp_pub_additional_year\">2023<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_239\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('239','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_239\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@conference{Bacciu2023c,<br \/>\r\ntitle = {Graph Representation Learning},<br \/>\r\nauthor = {Davide Bacciu and Federico Errica and Alessio Micheli and Nicol\u00f2 Navarin and Luca Pasa and Marco Podda and Daniele Zambon},<br \/>\r\neditor = {Michel Verleysen},<br \/>\r\nyear  = {2023},<br \/>\r\ndate = {2023-10-04},<br \/>\r\nurldate = {2023-10-04},<br \/>\r\nbooktitle = {Proceedings of the 31st European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {conference}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('239','tp_bibtex')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Graph Representation Learning\" src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/esann.png\" width=\"80\" alt=\"Graph Representation Learning\" \/><\/div><\/div><div class=\"tp_publication tp_publication_workshop\"><div class=\"tp_pub_number\">28.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Ceni, Andrea; 
 Cossu, Andrea;  Liu, Jingyue;  St\u00f6lzle, Maximilian;  Santina, Cosimo Della;  Gallicchio, Claudio;  Bacciu, Davide<\/p><p class=\"tp_pub_title\">Randomly Coupled Oscillators <span class=\"tp_pub_type workshop\">Workshop<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_booktitle\">Proceedings of the ECML\/PKDD Workshop on Deep Learning meets Neuromorphic Hardware, <\/span><span class=\"tp_pub_additional_year\">2023<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_252\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('252','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_252\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@workshop{Ceni2023c,<br \/>\r\ntitle = {Randomly Coupled Oscillators},<br \/>\r\nauthor = {Andrea Ceni and Andrea Cossu and Jingyue Liu and Maximilian St\u00f6lzle and Cosimo Della Santina and Claudio Gallicchio and Davide Bacciu},<br \/>\r\nyear  = {2023},<br \/>\r\ndate = {2023-09-18},<br \/>\r\nbooktitle = {Proceedings of the ECML\/PKDD Workshop on Deep Learning meets Neuromorphic Hardware},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {workshop}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('252','tp_bibtex')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Randomly Coupled Oscillators\" src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/ecml2023.png\" width=\"80\" alt=\"Randomly Coupled Oscillators\" \/><\/div><\/div><div class=\"tp_publication tp_publication_workshop\"><div class=\"tp_pub_number\">29.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Gravina, Alessio;  Gallicchio, Claudio;  Bacciu, Davide<\/p><p class=\"tp_pub_title\">Non-Dissipative Propagation by Randomized 
Anti-Symmetric Deep Graph Networks <span class=\"tp_pub_type workshop\">Workshop<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_booktitle\">Proceedings of the ECML\/PKDD Workshop on Deep Learning meets Neuromorphic Hardware, <\/span><span class=\"tp_pub_additional_year\">2023<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_253\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('253','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_253\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@workshop{Gravina2023c,<br \/>\r\ntitle = {Non-Dissipative Propagation by Randomized Anti-Symmetric Deep Graph Networks},<br \/>\r\nauthor = {Alessio Gravina and Claudio Gallicchio and Davide Bacciu},<br \/>\r\nyear  = {2023},<br \/>\r\ndate = {2023-09-18},<br \/>\r\nurldate = {2023-09-18},<br \/>\r\nbooktitle = {Proceedings of the ECML\/PKDD Workshop on Deep Learning meets Neuromorphic Hardware},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {workshop}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('253','tp_bibtex')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Non-Dissipative Propagation by Randomized Anti-Symmetric Deep Graph Networks\" src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/ecml2023.png\" width=\"80\" alt=\"Non-Dissipative Propagation by Randomized Anti-Symmetric Deep Graph Networks\" \/><\/div><\/div><div class=\"tp_publication tp_publication_conference\"><div class=\"tp_pub_number\">30.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Cosenza, Emanuele;  Valenti, Andrea;  Bacciu, Davide<\/p><p class=\"tp_pub_title\">Graph-based Polyphonic Multitrack Music Generation <span class=\"tp_pub_type 
conference\">Conference<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_booktitle\">Proceedings of the 32nd International Joint Conference on Artificial Intelligence (IJCAI 2023), <\/span><span class=\"tp_pub_additional_year\">2023<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_228\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('228','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_228\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@conference{Cosenza2023,<br \/>\r\ntitle = {Graph-based Polyphonic Multitrack Music Generation},<br \/>\r\nauthor = {Emanuele Cosenza and Andrea Valenti and Davide Bacciu},<br \/>\r\nyear  = {2023},<br \/>\r\ndate = {2023-08-19},<br \/>\r\nurldate = {2023-08-19},<br \/>\r\nbooktitle = {Proceedings of the 32nd International Joint Conference on Artificial Intelligence (IJCAI 2023)},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {conference}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('228','tp_bibtex')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Graph-based Polyphonic Multitrack Music Generation\" src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/ijcai.png\" width=\"80\" alt=\"Graph-based Polyphonic Multitrack Music Generation\" \/><\/div><\/div><div class=\"tp_publication tp_publication_conference\"><div class=\"tp_pub_number\">31.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Hemati, Hamed;  Lomonaco, Vincenzo;  Bacciu, Davide;  Borth, Damian<\/p><p class=\"tp_pub_title\">Partial Hypernetworks for Continual Learning <span class=\"tp_pub_type conference\">Conference<\/span> <\/p><p class=\"tp_pub_additional\"><span 
class=\"tp_pub_additional_booktitle\">Proceedings of the International Conference on Lifelong Learning Agents (CoLLAs 2023), <\/span><span class=\"tp_pub_additional_publisher\">Proceedings of Machine Learning Research, <\/span><span class=\"tp_pub_additional_year\">2023<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_232\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('232','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_232\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@conference{Hemati2023,<br \/>\r\ntitle = {Partial Hypernetworks for Continual Learning},<br \/>\r\nauthor = {Hamed Hemati and Vincenzo Lomonaco and Davide Bacciu and Damian Borth},<br \/>\r\nyear  = {2023},<br \/>\r\ndate = {2023-08-01},<br \/>\r\nurldate = {2023-08-01},<br \/>\r\nbooktitle = {Proceedings of the International Conference on Lifelong Learning Agents (CoLLAs 2023)},<br \/>\r\npublisher = {Proceedings of Machine Learning Research},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {conference}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('232','tp_bibtex')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Partial Hypernetworks for Continual Learning\" src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/collas.png\" width=\"80\" alt=\"Partial Hypernetworks for Continual Learning\" \/><\/div><\/div><div class=\"tp_publication tp_publication_conference\"><div class=\"tp_pub_number\">32.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Hemati, Hamed;  Cossu, Andrea;  Carta, Antonio;  Hurtado, Julio;  Pellegrini, Lorenzo;  Bacciu, Davide;  Lomonaco, Vincenzo;  Borth, Damian<\/p><p class=\"tp_pub_title\">Class-Incremental Learning with Repetition  <span 
class=\"tp_pub_type conference\">Conference<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_booktitle\">Proceedings of the International Conference on Lifelong Learning Agents (CoLLAs 2023), <\/span><span class=\"tp_pub_additional_publisher\">Proceedings of Machine Learning Research, <\/span><span class=\"tp_pub_additional_year\">2023<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_233\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('233','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_233\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@conference{Hemati2023b,<br \/>\r\ntitle = {Class-Incremental Learning with Repetition },<br \/>\r\nauthor = {Hamed Hemati and Andrea Cossu and Antonio Carta and Julio Hurtado and Lorenzo Pellegrini and Davide Bacciu and Vincenzo Lomonaco and Damian Borth},<br \/>\r\nyear  = {2023},<br \/>\r\ndate = {2023-08-01},<br \/>\r\nurldate = {2023-08-01},<br \/>\r\nbooktitle = {Proceedings of the International Conference on Lifelong Learning Agents (CoLLAs 2023)},<br \/>\r\npublisher = {Proceedings of Machine Learning Research},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {conference}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('233','tp_bibtex')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Class-Incremental Learning with Repetition \" src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/collas.png\" width=\"80\" alt=\"Class-Incremental Learning with Repetition \" \/><\/div><\/div><div class=\"tp_publication tp_publication_workshop\"><div class=\"tp_pub_number\">33.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Caro, Valerio De;  Bacciu, Davide;  Gallicchio, 
Claudio<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('242','tp_links')\" style=\"cursor:pointer;\">Decentralized Plasticity in Reservoir Dynamical Networks for Pervasive Environments<\/a> <span class=\"tp_pub_type workshop\">Workshop<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_booktitle\">Proceedings of the 2023 ICML Workshop on Localized Learning: Decentralized Model Updates via Non-Global Objectives, <\/span><span class=\"tp_pub_additional_year\">2023<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_resource_link\"><a id=\"tp_links_sh_242\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('242','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_242\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('242','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_242\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@workshop{nokey,<br \/>\r\ntitle = {Decentralized Plasticity in Reservoir Dynamical Networks for Pervasive Environments},<br \/>\r\nauthor = {Valerio De Caro and Davide Bacciu and Claudio Gallicchio},<br \/>\r\nurl = {https:\/\/openreview.net\/forum?id=5hScPOeDaR, PDF},<br \/>\r\nyear  = {2023},<br \/>\r\ndate = {2023-07-29},<br \/>\r\nurldate = {2023-07-29},<br \/>\r\nbooktitle = {Proceedings of the 2023 ICML Workshop on Localized Learning: Decentralized Model Updates via Non-Global Objectives},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {workshop}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('242','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_242\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul 
class=\"tp_pub_list\"><li><i class=\"fas fa-globe\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/openreview.net\/forum?id=5hScPOeDaR\" title=\"PDF\" target=\"_blank\">PDF<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('242','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Decentralized Plasticity in Reservoir Dynamical Networks for Pervasive Environments\" src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/icml.png\" width=\"80\" alt=\"Decentralized Plasticity in Reservoir Dynamical Networks for Pervasive Environments\" \/><\/div><\/div><div class=\"tp_publication tp_publication_workshop\"><div class=\"tp_pub_number\">34.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Ceni, Andrea;  Cossu, Andrea;  Liu, Jingyue;  St\u00f6lzle, Maximilian;  Santina, Cosimo Della;  Gallicchio, Claudio;  Bacciu, Davide<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('240','tp_links')\" style=\"cursor:pointer;\">Randomly Coupled Oscillators for Time Series Processing<\/a> <span class=\"tp_pub_type workshop\">Workshop<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_booktitle\">Proceedings of the 2023 ICML Workshop on New Frontiers in Learning, Control, and Dynamical Systems, <\/span><span class=\"tp_pub_additional_year\">2023<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_resource_link\"><a id=\"tp_links_sh_240\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('240','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_240\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('240','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_240\" style=\"display:none;\"><div 
class=\"tp_bibtex_entry\"><pre>@workshop{Ceni2023b,<br \/>\r\ntitle = {Randomly Coupled Oscillators for Time Series Processing},<br \/>\r\nauthor = {Andrea Ceni and Andrea Cossu and Jingyue Liu and Maximilian St\u00f6lzle and Cosimo Della Santina and Claudio Gallicchio and Davide Bacciu},<br \/>\r\nurl = {https:\/\/openreview.net\/forum?id=fmn7PMykEb, PDF},<br \/>\r\nyear  = {2023},<br \/>\r\ndate = {2023-07-28},<br \/>\r\nurldate = {2023-07-28},<br \/>\r\nbooktitle = {Proceedings of the 2023 ICML Workshop on New Frontiers in Learning, Control, and Dynamical Systems },<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {workshop}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('240','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_240\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-globe\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/openreview.net\/forum?id=fmn7PMykEb\" title=\"PDF\" target=\"_blank\">PDF<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('240','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Randomly Coupled Oscillators for Time Series Processing\" src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/icml.png\" width=\"80\" alt=\"Randomly Coupled Oscillators for Time Series Processing\" \/><\/div><\/div><div class=\"tp_publication tp_publication_workshop\"><div class=\"tp_pub_number\">35.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Massidda, Riccardo;  Landolfi, Francesco;  Cinquini, Martina;  Bacciu, Davide<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('241','tp_links')\" style=\"cursor:pointer;\">Differentiable Causal Discovery with Smooth Acyclic 
Orientations<\/a> <span class=\"tp_pub_type workshop\">Workshop<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_booktitle\">Proceedings of the 2023 ICML Workshop on Differentiable Almost Everything: Differentiable Relaxations, Algorithms, Operators, and Simulators, <\/span><span class=\"tp_pub_additional_year\">2023<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_resource_link\"><a id=\"tp_links_sh_241\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('241','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_241\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('241','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_241\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@workshop{Massidda2023b,<br \/>\r\ntitle = {Differentiable Causal Discovery with Smooth Acyclic Orientations},<br \/>\r\nauthor = {Riccardo Massidda and Francesco Landolfi and Martina Cinquini and Davide Bacciu},<br \/>\r\nurl = {https:\/\/openreview.net\/forum?id=IVwWgscehR, PDF},<br \/>\r\nyear  = {2023},<br \/>\r\ndate = {2023-07-28},<br \/>\r\nurldate = {2023-07-28},<br \/>\r\nbooktitle = {Proceedings of the 2023 ICML Workshop on Differentiable Almost Everything: Differentiable Relaxations, Algorithms, Operators, and Simulators},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {workshop}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('241','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_241\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-globe\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/openreview.net\/forum?id=IVwWgscehR\" title=\"PDF\" 
target=\"_blank\">PDF<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('241','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Differentiable Causal Discovery with Smooth Acyclic Orientations\" src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/icml.png\" width=\"80\" alt=\"Differentiable Causal Discovery with Smooth Acyclic Orientations\" \/><\/div><\/div><div class=\"tp_publication tp_publication_conference\"><div class=\"tp_pub_number\">36.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Simone, Lorenzo;  Bacciu, Davide<\/p><p class=\"tp_pub_title\">ECGAN: generative adversarial network for electrocardiography <span class=\"tp_pub_type conference\">Conference<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_booktitle\">Proceedings of Artificial Intelligence In Medicine 2023 (AIME 2023), <\/span><span class=\"tp_pub_additional_year\">2023<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_227\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('227','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_227\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@conference{nokey,<br \/>\r\ntitle = {ECGAN: generative adversarial network for electrocardiography},<br \/>\r\nauthor = {Lorenzo Simone and Davide Bacciu },<br \/>\r\nyear  = {2023},<br \/>\r\ndate = {2023-06-12},<br \/>\r\nurldate = {2023-06-12},<br \/>\r\nbooktitle = {Proceedings of Artificial Intelligence In Medicine 2023 (AIME 2023)},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {conference}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" 
onclick=\"teachpress_pub_showhide('227','tp_bibtex')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"ECGAN: generative adversarial network for electrocardiography\" src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/aime23.jpg\" width=\"80\" alt=\"ECGAN: generative adversarial network for electrocardiography\" \/><\/div><\/div><div class=\"tp_publication tp_publication_conference\"><div class=\"tp_pub_number\">37.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Lomonaco, Vincenzo;  Caro, Valerio De;  Gallicchio, Claudio;  Carta, Antonio;  Sardianos, Christos;  Varlamis, Iraklis;  Tserpes, Konstantinos;  Coppola, Massimo;  Marpena, Mina;  Politi, Sevasti;  Schoitsch, Erwin;  Bacciu, Davide<\/p><p class=\"tp_pub_title\">AI-Toolkit: a Microservices Architecture for Low-Code Decentralized Machine Intelligence <span class=\"tp_pub_type conference\">Conference<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_booktitle\">Proceedings of 2023 IEEE International Conference on Acoustics, Speech and Signal Processing, <\/span><span class=\"tp_pub_additional_year\">2023<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_230\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('230','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_230\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('230','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_230\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@conference{Lomonaco2023,<br \/>\r\ntitle = {AI-Toolkit: a Microservices Architecture for Low-Code Decentralized Machine Intelligence},<br \/>\r\nauthor = {Vincenzo Lomonaco and Valerio De Caro and Claudio Gallicchio and Antonio Carta and 
Christos Sardianos and Iraklis Varlamis and Konstantinos Tserpes and Massimo Coppola and Mina Marpena and Sevasti Politi and Erwin Schoitsch and Davide Bacciu},<br \/>\r\nyear  = {2023},<br \/>\r\ndate = {2023-06-04},<br \/>\r\nurldate = {2023-06-04},<br \/>\r\nbooktitle = {Proceedings of 2023 IEEE International Conference on Acoustics, Speech and Signal Processing},<br \/>\r\nabstract = {Artificial Intelligence and Machine Learning toolkits such as Scikit-learn, PyTorch and Tensorflow provide today a solid starting point for the rapid prototyping of R&D solutions. However, they can be hardly ported to heterogeneous decentralised hardware and real-world production environments. A common practice involves outsourcing deployment solutions to scalable cloud infrastructures such as Amazon SageMaker or Microsoft Azure. In this paper, we proposed an open-source microservices-based architecture for decentralised machine intelligence which aims at bringing R&D and deployment functionalities closer following a low-code approach. Such an approach would guarantee flexible integration of cutting-edge functionalities while preserving complete control over the deployed solutions at negligible costs and maintenance efforts.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {conference}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('230','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_230\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Artificial Intelligence and Machine Learning toolkits such as Scikit-learn, PyTorch and Tensorflow provide today a solid starting point for the rapid prototyping of R&amp;D solutions. However, they can be hardly ported to heterogeneous decentralised hardware and real-world production environments. 
A common practice involves outsourcing deployment solutions to scalable cloud infrastructures such as Amazon SageMaker or Microsoft Azure. In this paper, we proposed an open-source microservices-based architecture for decentralised machine intelligence which aims at bringing R&amp;D and deployment functionalities closer following a low-code approach. Such an approach would guarantee flexible integration of cutting-edge functionalities while preserving complete control over the deployed solutions at negligible costs and maintenance efforts.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('230','tp_abstract')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"AI-Toolkit: a Microservices Architecture for Low-Code Decentralized Machine Intelligence\" src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/icassp23.png\" width=\"80\" alt=\"AI-Toolkit: a Microservices Architecture for Low-Code Decentralized Machine Intelligence\" \/><\/div><\/div><div class=\"tp_publication tp_publication_conference\"><div class=\"tp_pub_number\">38.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Caro, Valerio De;  Danzinger, Herbert;  Gallicchio, Claudio;  K\u00f6ncz\u00f6l, Clemens;  Lomonaco, Vincenzo;  Marmpena, Mina;  Politi, Sevasti;  Veledar, Omar;  Bacciu, Davide<\/p><p class=\"tp_pub_title\">Prediction of Driver's Stress Affection in Simulated Autonomous Driving Scenarios <span class=\"tp_pub_type conference\">Conference<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_booktitle\">Proceedings of 2023 IEEE International Conference on Acoustics, Speech and Signal Processing, <\/span><span class=\"tp_pub_additional_year\">2023<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_231\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('231','tp_abstract')\" title=\"Show 
abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_231\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('231','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_231\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@conference{DeCaro2023,<br \/>\r\ntitle = {Prediction of Driver's Stress Affection in Simulated Autonomous Driving Scenarios},<br \/>\r\nauthor = {Valerio De Caro and Herbert Danzinger and Claudio Gallicchio and Clemens K\u00f6ncz\u00f6l and Vincenzo Lomonaco and Mina Marmpena and Mina Marpena and Sevasti Politi and Omar Veledar and Davide Bacciu},<br \/>\r\nyear  = {2023},<br \/>\r\ndate = {2023-06-04},<br \/>\r\nurldate = {2023-06-04},<br \/>\r\nbooktitle = {Proceedings of 2023 IEEE International Conference on Acoustics, Speech and Signal Processing},<br \/>\r\nabstract = {We investigate the task of predicting stress affection from physiological data of users experiencing simulations of autonomous driving. We approach this task on two levels of granularity, depending on whether the prediction is performed at end of the simulation, or along the simulation. In the former, denoted as coarse-grained prediction, we employed Decision Trees. In the latter, denoted as fine-grained prediction, we employed Echo State Networks, a Recurrent Neural Network<br \/>\r\nthat allows efficient learning from temporal data and hence is<br \/>\r\nsuitable for pervasive environments. We conduct experiments on a private dataset of physiological data from people participating in multiple driving scenarios simulating different stressful events. 
The results show that the proposed model is capable of detecting conditions of event-related cognitive stress, proving the existence of a correlation between stressful events and the physiological data.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {conference}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('231','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_231\" style=\"display:none;\"><div class=\"tp_abstract_entry\">We investigate the task of predicting stress affection from physiological data of users experiencing simulations of autonomous driving. We approach this task on two levels of granularity, depending on whether the prediction is performed at the end of the simulation, or along the simulation. In the former, denoted as coarse-grained prediction, we employed Decision Trees. In the latter, denoted as fine-grained prediction, we employed Echo State Networks, a Recurrent Neural Network that allows efficient learning from temporal data and hence is suitable for pervasive environments. We conduct experiments on a private dataset of physiological data from people participating in multiple driving scenarios simulating different stressful events. 
The results show that the proposed model is capable of detecting conditions of event-related cognitive stress, proving the existence of a correlation between stressful events and the physiological data.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('231','tp_abstract')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Prediction of Driver's Stress Affection in Simulated Autonomous Driving Scenarios\" src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/icassp23.png\" width=\"80\" alt=\"Prediction of Driver's Stress Affection in Simulated Autonomous Driving Scenarios\" \/><\/div><\/div><div class=\"tp_publication tp_publication_conference\"><div class=\"tp_pub_number\">39.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Gravina, Alessio;  Bacciu, Davide;  Gallicchio, Claudio<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('225','tp_links')\" style=\"cursor:pointer;\">Anti-Symmetric DGN: a stable architecture for Deep Graph Networks<\/a> <span class=\"tp_pub_type conference\">Conference<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_booktitle\">Proceedings of the Eleventh International Conference on Learning Representations (ICLR 2023)  , <\/span><span class=\"tp_pub_additional_year\">2023<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_225\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('225','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_225\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('225','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_225\" class=\"tp_show\" 
onclick=\"teachpress_pub_showhide('225','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_225\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@conference{Gravina2023,<br \/>\r\ntitle = {Anti-Symmetric DGN: a stable architecture for Deep Graph Networks},<br \/>\r\nauthor = {Alessio Gravina and Davide Bacciu and Claudio Gallicchio},<br \/>\r\nurl = {https:\/\/openreview.net\/pdf?id=J3Y7cgZOOS},<br \/>\r\nyear  = {2023},<br \/>\r\ndate = {2023-05-01},<br \/>\r\nurldate = {2023-05-01},<br \/>\r\nbooktitle = {Proceedings of the Eleventh International Conference on Learning Representations (ICLR 2023)  },<br \/>\r\nabstract = {Deep Graph Networks (DGNs) currently dominate the research landscape of learning from graphs, due to their efficiency and ability to implement an adaptive message-passing scheme between the nodes. However, DGNs are typically limited in their ability to propagate and preserve long-term dependencies between nodes, i.e., they suffer from the over-squashing phenomena. As a result, we can expect them to under-perform, since different problems require to capture interactions at different (and possibly large) radii in order to be effectively solved. In this work, we present Anti-Symmetric Deep Graph Networks (A-DGNs), a framework for stable and non-dissipative DGN design, conceived through the lens of ordinary differential equations. We give theoretical proof that our method is stable and non-dissipative, leading to two key results: long-range information between nodes is preserved, and no gradient vanishing or explosion occurs in training. 
We empirically validate the proposed approach on several graph benchmarks, showing that A-DGN yields to improved performance and enables to learn effectively even when dozens of layers are used.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {conference}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('225','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_225\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Deep Graph Networks (DGNs) currently dominate the research landscape of learning from graphs, due to their efficiency and ability to implement an adaptive message-passing scheme between the nodes. However, DGNs are typically limited in their ability to propagate and preserve long-term dependencies between nodes, i.e., they suffer from the over-squashing phenomena. As a result, we can expect them to under-perform, since different problems require to capture interactions at different (and possibly large) radii in order to be effectively solved. In this work, we present Anti-Symmetric Deep Graph Networks (A-DGNs), a framework for stable and non-dissipative DGN design, conceived through the lens of ordinary differential equations. We give theoretical proof that our method is stable and non-dissipative, leading to two key results: long-range information between nodes is preserved, and no gradient vanishing or explosion occurs in training. 
We empirically validate the proposed approach on several graph benchmarks, showing that A-DGN yields to improved performance and enables to learn effectively even when dozens of layers are used.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('225','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_225\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-globe\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/openreview.net\/pdf?id=J3Y7cgZOOS\" title=\"https:\/\/openreview.net\/pdf?id=J3Y7cgZOOS\" target=\"_blank\">https:\/\/openreview.net\/pdf?id=J3Y7cgZOOS<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('225','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Anti-Symmetric DGN: a stable architecture for Deep Graph Networks\" src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/iclr.png\" width=\"80\" alt=\"Anti-Symmetric DGN: a stable architecture for Deep Graph Networks\" \/><\/div><\/div><div class=\"tp_publication tp_publication_conference\"><div class=\"tp_pub_number\">40.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Numeroso, Danilo;  Bacciu, Davide;  Veli\u010dkovi\u0107, Petar<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('226','tp_links')\" style=\"cursor:pointer;\">Dual Algorithmic Reasoning<\/a> <span class=\"tp_pub_type conference\">Conference<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_booktitle\">Proceedings of the Eleventh International Conference on Learning Representations (ICLR 2023), <\/span><span class=\"tp_pub_additional_year\">2023<\/span><span class=\"tp_pub_additional_note\">, (Notable Spotlight paper)<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a 
id=\"tp_abstract_sh_226\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('226','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_226\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('226','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_226\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('226','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_226\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@conference{Numeroso2023,<br \/>\r\ntitle = {Dual Algorithmic Reasoning},<br \/>\r\nauthor = {Danilo Numeroso and Davide Bacciu and Petar Veli\u010dkovi\u0107},<br \/>\r\nurl = {https:\/\/openreview.net\/pdf?id=hhvkdRdWt1F},<br \/>\r\nyear  = {2023},<br \/>\r\ndate = {2023-05-01},<br \/>\r\nurldate = {2023-05-01},<br \/>\r\nbooktitle = {Proceedings of the Eleventh International Conference on Learning Representations (ICLR 2023)},<br \/>\r\nabstract = {Neural Algorithmic Reasoning is an emerging area of machine learning which seeks to infuse algorithmic computation in neural networks, typically by training neural models to approximate steps of classical algorithms. In this context, much of the current work has focused on learning reachability and shortest path graph algorithms, showing that joint learning on similar algorithms is beneficial for generalisation. However, when targeting more complex problems, such \"similar\" algorithms become more difficult to find. Here, we propose to learn algorithms by exploiting duality of the underlying algorithmic problem. Many algorithms solve optimisation problems. 
We demonstrate that simultaneously learning the dual definition of these optimisation problems in algorithmic learning allows for better learning and qualitatively better solutions. Specifically, we exploit the max-flow min-cut theorem to simultaneously learn these two algorithms over synthetically generated graphs, demonstrating the effectiveness of the proposed approach. We then validate the real-world utility of our dual algorithmic reasoner by deploying it on a challenging brain vessel classification task, which likely depends on the vessels\u2019 flow properties. We demonstrate a clear performance gain when using our model within such a context, and empirically show that learning the max-flow and min-cut algorithms together is critical for achieving such a result.},<br \/>\r\nnote = {Notable Spotlight paper},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {conference}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('226','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_226\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Neural Algorithmic Reasoning is an emerging area of machine learning which seeks to infuse algorithmic computation in neural networks, typically by training neural models to approximate steps of classical algorithms. In this context, much of the current work has focused on learning reachability and shortest path graph algorithms, showing that joint learning on similar algorithms is beneficial for generalisation. However, when targeting more complex problems, such &quot;similar&quot; algorithms become more difficult to find. Here, we propose to learn algorithms by exploiting duality of the underlying algorithmic problem. Many algorithms solve optimisation problems. 
We demonstrate that simultaneously learning the dual definition of these optimisation problems in algorithmic learning allows for better learning and qualitatively better solutions. Specifically, we exploit the max-flow min-cut theorem to simultaneously learn these two algorithms over synthetically generated graphs, demonstrating the effectiveness of the proposed approach. We then validate the real-world utility of our dual algorithmic reasoner by deploying it on a challenging brain vessel classification task, which likely depends on the vessels\u2019 flow properties. We demonstrate a clear performance gain when using our model within such a context, and empirically show that learning the max-flow and min-cut algorithms together is critical for achieving such a result.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('226','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_226\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-globe\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/openreview.net\/pdf?id=hhvkdRdWt1F\" title=\"https:\/\/openreview.net\/pdf?id=hhvkdRdWt1F\" target=\"_blank\">https:\/\/openreview.net\/pdf?id=hhvkdRdWt1F<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('226','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Dual Algorithmic Reasoning\" src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/iclr.png\" width=\"80\" alt=\"Dual Algorithmic Reasoning\" \/><\/div><\/div><div class=\"tp_publication tp_publication_conference\"><div class=\"tp_pub_number\">41.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Massidda, Riccardo;  Geiger, Atticus;  Icard, Thomas;  Bacciu, Davide<\/p><p class=\"tp_pub_title\">Causal Abstraction with Soft Interventions <span class=\"tp_pub_type 
conference\">Conference<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_booktitle\">Proceedings of the 2nd Conference on Causal Learning and Reasoning (CLeaR 2023), <\/span><span class=\"tp_pub_additional_publisher\">PMLR, <\/span><span class=\"tp_pub_additional_year\">2023<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_223\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('223','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_223\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@conference{Massidda2023,<br \/>\r\ntitle = {Causal Abstraction with Soft Interventions},<br \/>\r\nauthor = {Riccardo Massidda and Atticus Geiger and Thomas Icard and Davide Bacciu},<br \/>\r\nyear  = {2023},<br \/>\r\ndate = {2023-04-17},<br \/>\r\nurldate = {2023-04-17},<br \/>\r\nbooktitle = {Proceedings of the 2nd Conference on Causal Learning and Reasoning (CLeaR 2023)},<br \/>\r\npublisher = {PMLR},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {conference}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('223','tp_bibtex')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Causal Abstraction with Soft Interventions\" src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/clear.png\" width=\"80\" alt=\"Causal Abstraction with Soft Interventions\" \/><\/div><\/div><div class=\"tp_publication tp_publication_workshop\"><div class=\"tp_pub_number\">42.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Gravina, Alessio;  Bacciu, Davide;  Gallicchio, Claudio<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('229','tp_links')\" style=\"cursor:pointer;\">Non-Dissipative Propagation by 
Anti-Symmetric Deep Graph Networks<\/a> <span class=\"tp_pub_type workshop\">Workshop<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_booktitle\">Proceedings of the Ninth International Workshop on Deep Learning on Graphs: Method and Applications (DLG-AAAI\u201923), <\/span><span class=\"tp_pub_additional_year\">2023<\/span><span class=\"tp_pub_additional_note\">, (Winner of the Best Student Paper Award at DLG-AAAI23)<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_229\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('229','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_229\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('229','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_229\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('229','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_229\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@workshop{nokey,<br \/>\r\ntitle = {Non-Dissipative Propagation by Anti-Symmetric Deep Graph Networks},<br \/>\r\nauthor = {Alessio Gravina and Davide Bacciu and Claudio Gallicchio},<br \/>\r\nurl = {https:\/\/drive.google.com\/file\/d\/1uPHhjwSa3g_hRvHwx6UnbMLgGN_cAqMu\/view, PDF},<br \/>\r\nyear  = {2023},<br \/>\r\ndate = {2023-02-13},<br \/>\r\nurldate = {2023-02-13},<br \/>\r\nbooktitle = {Proceedings of the Ninth International Workshop on Deep Learning on Graphs: Method and Applications (DLG-AAAI\u201923)},<br \/>\r\nabstract = {Deep Graph Networks (DGNs) currently dominate the research landscape of learning from graphs, due to the efficiency of their adaptive message-passing scheme between nodes. 
However, DGNs are typically limited in their ability to propagate and preserve long-term dependencies between nodes, i.e., they suffer from the over-squashing phenomena. This reduces their effectiveness, since predictive problems may require to capture interactions at different, and possibly large, radii in order to be effectively solved. In this work, we present Anti-Symmetric DGN (A-DGN), a framework for stable and non-dissipative DGN design, conceived through the lens of ordinary differential equations. We give theoretical proof that our method is stable and non-dissipative, leading to two key results: long-range information between nodes is preserved, and no gradient vanishing or explosion occurs in training. We empirically validate the proposed approach on several graph benchmarks, showing that A-DGN yields to improved performance and enables to learn effectively even when dozens of layers are used.},<br \/>\r\nnote = {Winner of the Best Student Paper Award at DLG-AAAI23},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {workshop}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('229','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_229\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Deep Graph Networks (DGNs) currently dominate the research landscape of learning from graphs, due to the efficiency of their adaptive message-passing scheme between nodes. However, DGNs are typically limited in their ability to propagate and preserve long-term dependencies between nodes, i.e., they suffer from the over-squashing phenomena. This reduces their effectiveness, since predictive problems may require to capture interactions at different, and possibly large, radii in order to be effectively solved. 
In this work, we present Anti-Symmetric DGN (A-DGN), a framework for stable and non-dissipative DGN design, conceived through the lens of ordinary differential equations. We give theoretical proof that our method is stable and non-dissipative, leading to two key results: long-range information between nodes is preserved, and no gradient vanishing or explosion occurs in training. We empirically validate the proposed approach on several graph benchmarks, showing that A-DGN yields to improved performance and enables to learn effectively even when dozens of layers are used.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('229','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_229\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-globe\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/drive.google.com\/file\/d\/1uPHhjwSa3g_hRvHwx6UnbMLgGN_cAqMu\/view\" title=\"PDF\" target=\"_blank\">PDF<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('229','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Non-Dissipative Propagation by Anti-Symmetric Deep Graph Networks\" src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/aaai.jpeg\" width=\"80\" alt=\"Non-Dissipative Propagation by Anti-Symmetric Deep Graph Networks\" \/><\/div><\/div><div class=\"tp_publication tp_publication_conference\"><div class=\"tp_pub_number\">43.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Bacciu, Davide;  Conte, Alessio;  Landolfi, Francesco<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('219','tp_links')\" style=\"cursor:pointer;\">Generalizing Downsampling from Regular Data to Graphs<\/a> <span class=\"tp_pub_type conference\">Conference<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_booktitle\">Proceedings of the Thirty-Seventh AAAI Conference on Artificial Intelligence, <\/span><span class=\"tp_pub_additional_year\">2023<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_219\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('219','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_219\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('219','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_219\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('219','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_219\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@conference{Bacciu2023,<br \/>\r\ntitle = {Generalizing 
Downsampling from Regular Data to Graphs},<br \/>\r\nauthor = {Davide Bacciu and Alessio Conte and Francesco Landolfi},<br \/>\r\nurl = {https:\/\/arxiv.org\/abs\/2208.03523, Arxiv},<br \/>\r\nyear  = {2023},<br \/>\r\ndate = {2023-02-07},<br \/>\r\nurldate = {2023-02-07},<br \/>\r\nbooktitle = {Proceedings of the Thirty-Seventh AAAI Conference on Artificial Intelligence},<br \/>\r\nabstract = {Downsampling produces coarsened, multi-resolution representations of data and it is used, for example, to produce lossy compression and visualization of large images, reduce computational costs, and boost deep neural representation learning. Unfortunately, due to their lack of a regular structure, there is still no consensus on how downsampling should apply to graphs and linked data. Indeed reductions in graph data are still needed for the goals described above, but reduction mechanisms do not have the same focus on preserving topological structures and properties, while allowing for resolution-tuning, as is the case in regular data downsampling. In this paper, we take a step in this direction, introducing a unifying interpretation of downsampling in regular and graph data. In particular, we define a graph coarsening mechanism which is a graph-structured counterpart of controllable equispaced coarsening mechanisms in regular data. We prove theoretical guarantees for distortion bounds on path lengths, as well as the ability to preserve key topological properties in the coarsened graphs. We leverage these concepts to define a graph pooling mechanism that we empirically assess in graph classification tasks, providing a greedy algorithm that allows efficient parallel implementation on GPUs, and showing that it compares favorably against pooling methods in literature. 
},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {conference}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('219','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_219\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Downsampling produces coarsened, multi-resolution representations of data and it is used, for example, to produce lossy compression and visualization of large images, reduce computational costs, and boost deep neural representation learning. Unfortunately, due to their lack of a regular structure, there is still no consensus on how downsampling should apply to graphs and linked data. Indeed reductions in graph data are still needed for the goals described above, but reduction mechanisms do not have the same focus on preserving topological structures and properties, while allowing for resolution-tuning, as is the case in regular data downsampling. In this paper, we take a step in this direction, introducing a unifying interpretation of downsampling in regular and graph data. In particular, we define a graph coarsening mechanism which is a graph-structured counterpart of controllable equispaced coarsening mechanisms in regular data. We prove theoretical guarantees for distortion bounds on path lengths, as well as the ability to preserve key topological properties in the coarsened graphs. We leverage these concepts to define a graph pooling mechanism that we empirically assess in graph classification tasks, providing a greedy algorithm that allows efficient parallel implementation on GPUs, and showing that it compares favorably against pooling methods in literature. 
<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('219','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_219\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-arxiv\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/arxiv.org\/abs\/2208.03523\" title=\"Arxiv\" target=\"_blank\">Arxiv<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('219','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Generalizing Downsampling from Regular Data to Graphs\" src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/aaai.jpeg\" width=\"80\" alt=\"Generalizing Downsampling from Regular Data to Graphs\" \/><\/div><\/div><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_number\">44.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Bacciu, Davide;  Errica, Federico;  Gravina, Alessio;  Madeddu, Lorenzo;  Podda, Marco;  Stilo, Giovanni<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('224','tp_links')\" style=\"cursor:pointer;\">Deep Graph Networks for Drug Repurposing with Multi-Protein Targets<\/a> <span class=\"tp_pub_type article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">IEEE Transactions on Emerging Topics in Computing, <\/span><span class=\"tp_pub_additional_year\">2023<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_resource_link\"><a id=\"tp_links_sh_224\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('224','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_224\" class=\"tp_show\" 
onclick=\"teachpress_pub_showhide('224','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_224\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Bacciu2023b,<br \/>\r\ntitle = {Deep Graph Networks for Drug Repurposing with Multi-Protein Targets},<br \/>\r\nauthor = {Davide Bacciu and Federico Errica and Alessio Gravina and Lorenzo Madeddu and Marco Podda and Giovanni Stilo},<br \/>\r\ndoi = {10.1109\/TETC.2023.3238963},<br \/>\r\nyear  = {2023},<br \/>\r\ndate = {2023-02-01},<br \/>\r\nurldate = {2023-02-01},<br \/>\r\njournal = {IEEE Transactions on Emerging Topics in Computing},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('224','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_224\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1109\/TETC.2023.3238963\" title=\"Follow DOI:10.1109\/TETC.2023.3238963\" target=\"_blank\">doi:10.1109\/TETC.2023.3238963<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('224','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Deep Graph Networks for Drug Repurposing with Multi-Protein Targets\" src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/ieeetranscomp.jpg\" width=\"80\" alt=\"Deep Graph Networks for Drug Repurposing with Multi-Protein Targets\" \/><\/div><\/div><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_number\">45.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Lanciano, Giacomo;  Galli, Filippo;  Cucinotta, Tommaso;  Bacciu, 
Davide;  Passarella, Andrea<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('243','tp_links')\" style=\"cursor:pointer;\">Extending OpenStack Monasca for Predictive Elasticity Control<\/a> <span class=\"tp_pub_type article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Big Data Mining and Analytics, <\/span><span class=\"tp_pub_additional_year\">2023<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_resource_link\"><a id=\"tp_links_sh_243\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('243','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_243\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('243','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_243\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{Lanciano2023extending,<br \/>\r\ntitle = {Extending OpenStack Monasca for Predictive Elasticity Control},<br \/>\r\nauthor = {Giacomo Lanciano and Filippo Galli and Tommaso Cucinotta and Davide Bacciu and Andrea Passarella},<br \/>\r\ndoi = {10.26599\/BDMA.2023.9020014},<br \/>\r\nyear  = {2023},<br \/>\r\ndate = {2023-01-01},<br \/>\r\nurldate = {2023-01-01},<br \/>\r\njournal = {Big Data Mining and Analytics},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('243','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_243\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.26599\/BDMA.2023.9020014\" 
title=\"Follow DOI:10.26599\/BDMA.2023.9020014\" target=\"_blank\">doi:10.26599\/BDMA.2023.9020014<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('243','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Extending OpenStack Monasca for Predictive Elasticity Control\" src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/bda.jpg\" width=\"80\" alt=\"Extending OpenStack Monasca for Predictive Elasticity Control\" \/><\/div><\/div><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_number\">46.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Caro, Valerio De;  Gallicchio, Claudio;  Bacciu, Davide<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('244','tp_links')\" style=\"cursor:pointer;\">Continual adaptation of federated reservoirs in pervasive environments<\/a> <span class=\"tp_pub_type article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Neurocomputing, <\/span><span class=\"tp_pub_additional_pages\">pp. 
126638, <\/span><span class=\"tp_pub_additional_year\">2023<\/span>, <span class=\"tp_pub_additional_issn\">ISSN: 0925-2312<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_244\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('244','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_244\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('244','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_244\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('244','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_244\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{DECARO2023126638,<br \/>\r\ntitle = {Continual adaptation of federated reservoirs in pervasive environments},<br \/>\r\nauthor = {Valerio De Caro and Claudio Gallicchio and Davide Bacciu},<br \/>\r\nurl = {https:\/\/www.sciencedirect.com\/science\/article\/pii\/S0925231223007610},<br \/>\r\ndoi = {10.1016\/j.neucom.2023.126638},<br \/>\r\nissn = {0925-2312},<br \/>\r\nyear  = {2023},<br \/>\r\ndate = {2023-01-01},<br \/>\r\nurldate = {2023-01-01},<br \/>\r\njournal = {Neurocomputing},<br \/>\r\npages = {126638},<br \/>\r\nabstract = {When performing learning tasks in pervasive environments, the main challenge arises from the need of combining federated and continual settings. The former comes from the massive distribution of devices with privacy-regulated data. The latter is required by the low resources of the participating devices, which may retain data for short periods of time. In this paper, we propose a setup for learning with Echo State Networks (ESNs) in pervasive environments. 
Our proposal focuses on the use of Intrinsic Plasticity (IP), a gradient-based method for adapting the reservoir\u2019s non-linearity. First, we extend the objective function of IP to include the uncertainty arising from the distribution of the data over space and time. Then, we propose Federated Intrinsic Plasticity (FedIP), which is intended for client\u2013server federated topologies with stationary data, and adapts the learning scheme provided by Federated Averaging (FedAvg) to include the learning rule of IP. Finally, we further extend this algorithm for learning to Federated Continual Intrinsic Plasticity (FedCLIP) to equip clients with CL strategies for dealing with continuous data streams. We evaluate our approach on an incremental setup built upon real-world datasets from human monitoring, where we tune the complexity of the scenario in terms of the distribution of the data over space and time. Results show that both our algorithms improve the representation capabilities and the performance of the ESN, while being robust to catastrophic forgetting.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('244','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_244\" style=\"display:none;\"><div class=\"tp_abstract_entry\">When performing learning tasks in pervasive environments, the main challenge arises from the need of combining federated and continual settings. The former comes from the massive distribution of devices with privacy-regulated data. The latter is required by the low resources of the participating devices, which may retain data for short periods of time. In this paper, we propose a setup for learning with Echo State Networks (ESNs) in pervasive environments. 
Our proposal focuses on the use of Intrinsic Plasticity (IP), a gradient-based method for adapting the reservoir\u2019s non-linearity. First, we extend the objective function of IP to include the uncertainty arising from the distribution of the data over space and time. Then, we propose Federated Intrinsic Plasticity (FedIP), which is intended for client\u2013server federated topologies with stationary data, and adapts the learning scheme provided by Federated Averaging (FedAvg) to include the learning rule of IP. Finally, we further extend this algorithm for learning to Federated Continual Intrinsic Plasticity (FedCLIP) to equip clients with CL strategies for dealing with continuous data streams. We evaluate our approach on an incremental setup built upon real-world datasets from human monitoring, where we tune the complexity of the scenario in terms of the distribution of the data over space and time. Results show that both our algorithms improve the representation capabilities and the performance of the ESN, while being robust to catastrophic forgetting.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('244','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_244\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-globe\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/www.sciencedirect.com\/science\/article\/pii\/S0925231223007610\" title=\"https:\/\/www.sciencedirect.com\/science\/article\/pii\/S0925231223007610\" target=\"_blank\">https:\/\/www.sciencedirect.com\/science\/article\/pii\/S0925231223007610<\/a><\/li><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1016\/j.neucom.2023.126638\" title=\"Follow DOI:10.1016\/j.neucom.2023.126638\" target=\"_blank\">doi:10.1016\/j.neucom.2023.126638<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a 
class=\"tp_close\" onclick=\"teachpress_pub_showhide('244','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Continual adaptation of federated reservoirs in pervasive environments\" src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/neurocomputing.png\" width=\"80\" alt=\"Continual adaptation of federated reservoirs in pervasive environments\" \/><\/div><\/div><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_number\">47.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Lanciano, Giacomo;  Andreoli, Remo;  Cucinotta, Tommaso;  Bacciu, Davide;  Passarella, Andrea<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('245','tp_links')\" style=\"cursor:pointer;\">A 2-phase Strategy For Intelligent Cloud Operations<\/a> <span class=\"tp_pub_type article\">Journal Article<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">IEEE Access, <\/span><span class=\"tp_pub_additional_pages\">pp. 
1-1, <\/span><span class=\"tp_pub_additional_year\">2023<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_resource_link\"><a id=\"tp_links_sh_245\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('245','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_245\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('245','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_245\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{10239346,<br \/>\r\ntitle = {A 2-phase Strategy For Intelligent Cloud Operations},<br \/>\r\nauthor = {Giacomo Lanciano and Remo Andreoli and Tommaso Cucinotta and Davide Bacciu and Andrea Passarella},<br \/>\r\ndoi = {10.1109\/ACCESS.2023.3312218},<br \/>\r\nyear  = {2023},<br \/>\r\ndate = {2023-01-01},<br \/>\r\nurldate = {2023-01-01},<br \/>\r\njournal = {IEEE Access},<br \/>\r\npages = {1-1},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('245','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_245\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1109\/ACCESS.2023.3312218\" title=\"Follow DOI:10.1109\/ACCESS.2023.3312218\" target=\"_blank\">doi:10.1109\/ACCESS.2023.3312218<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('245','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"A 2-phase Strategy For Intelligent Cloud Operations\" src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/access.jpg\" 
width=\"80\" alt=\"A 2-phase Strategy For Intelligent Cloud Operations\" \/><\/div><\/div><h3 class=\"tp_h3\" id=\"tp_h3_2022\">2022<\/h3><div class=\"tp_publication tp_publication_conference\"><div class=\"tp_pub_number\">48.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Caro, Valerio De;  Gallicchio, Claudio;  Bacciu, Davide<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('210','tp_links')\" style=\"cursor:pointer;\">Federated Adaptation of Reservoirs via Intrinsic Plasticity<\/a> <span class=\"tp_pub_type conference\">Conference<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_booktitle\">Proceedings of the 30th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning  (ESANN 2022), <\/span><span class=\"tp_pub_additional_year\">2022<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_210\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('210','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_210\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('210','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_210\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('210','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_210\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@conference{Caro2022,<br \/>\r\ntitle = {Federated Adaptation of Reservoirs via Intrinsic Plasticity},<br \/>\r\nauthor = {Valerio {De Caro} and Claudio Gallicchio and Davide Bacciu},<br \/>\r\neditor = {Michel Verleysen},<br \/>\r\nurl = {https:\/\/arxiv.org\/abs\/2206.11087, Arxiv},<br \/>\r\nyear  = {2022},<br \/>\r\ndate = {2022-10-05},<br 
\/>\r\nurldate = {2022-10-05},<br \/>\r\nbooktitle = {Proceedings of the 30th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning  (ESANN 2022)},<br \/>\r\nabstract = {We propose a novel algorithm for performing federated learning with Echo State Networks (ESNs) in a client-server scenario. In particular, our proposal focuses on the adaptation of reservoirs by combining Intrinsic Plasticity with Federated Averaging. The former is a gradient-based method for adapting the reservoir's non-linearity in a local and unsupervised manner, while the latter provides the framework for learning in the federated scenario. We evaluate our approach on real-world datasets from human monitoring, in comparison with the previous approach for federated ESNs existing in the literature. Results show that adapting the reservoir with our algorithm provides a significant improvement on the performance of the global model. },<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {conference}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('210','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_210\" style=\"display:none;\"><div class=\"tp_abstract_entry\">We propose a novel algorithm for performing federated learning with Echo State Networks (ESNs) in a client-server scenario. In particular, our proposal focuses on the adaptation of reservoirs by combining Intrinsic Plasticity with Federated Averaging. The former is a gradient-based method for adapting the reservoir's non-linearity in a local and unsupervised manner, while the latter provides the framework for learning in the federated scenario. We evaluate our approach on real-world datasets from human monitoring, in comparison with the previous approach for federated ESNs existing in the literature. 
Results show that adapting the reservoir with our algorithm provides a significant improvement on the performance of the global model. <\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('210','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_210\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-arxiv\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/arxiv.org\/abs\/2206.11087\" title=\"Arxiv\" target=\"_blank\">Arxiv<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('210','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Federated Adaptation of Reservoirs via Intrinsic Plasticity\" src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/esann.png\" width=\"80\" alt=\"Federated Adaptation of Reservoirs via Intrinsic Plasticity\" \/><\/div><\/div><div class=\"tp_publication tp_publication_conference\"><div class=\"tp_pub_number\">49.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Bacciu, Davide;  Errica, Federico;  Navarin, Nicol\u00f2;  Pasa, Luca;  Zambon, Daniele<\/p><p class=\"tp_pub_title\">Deep Learning for Graphs <span class=\"tp_pub_type conference\">Conference<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_booktitle\">Proceedings of the 30th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning  (ESANN 2022), <\/span><span class=\"tp_pub_additional_year\">2022<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_214\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('214','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_214\" style=\"display:none;\"><div 
class=\"tp_bibtex_entry\"><pre>@conference{nokey,<br \/>\r\ntitle = {Deep Learning for Graphs},<br \/>\r\nauthor = {Davide Bacciu and Federico Errica and Nicol\u00f2 Navarin and Luca Pasa and Daniele Zambon},<br \/>\r\neditor = {Michel Verleysen},<br \/>\r\nyear  = {2022},<br \/>\r\ndate = {2022-10-05},<br \/>\r\nurldate = {2022-10-05},<br \/>\r\nbooktitle = {Proceedings of the 30th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning  (ESANN 2022)},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {conference}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('214','tp_bibtex')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Deep Learning for Graphs\" src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/esann.png\" width=\"80\" alt=\"Deep Learning for Graphs\" \/><\/div><\/div><div class=\"tp_publication tp_publication_conference\"><div class=\"tp_pub_number\">50.<\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Valenti, Andrea;  Bacciu, Davide<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('220','tp_links')\" style=\"cursor:pointer;\">Modular Representations for Weak Disentanglement<\/a> <span class=\"tp_pub_type conference\">Conference<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_booktitle\">Proceedings of the 30th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN 2022), <\/span><span class=\"tp_pub_additional_year\">2022<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_220\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('220','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a 
id=\"tp_links_sh_220\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('220','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_220\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('220','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_220\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@conference{Valenti2022c,<br \/>\r\ntitle = {Modular Representations for Weak Disentanglement},<br \/>\r\nauthor = {Andrea Valenti and Davide Bacciu},<br \/>\r\neditor = {Michel Verleysen},<br \/>\r\nurl = {https:\/\/arxiv.org\/pdf\/2209.05336.pdf},<br \/>\r\nyear  = {2022},<br \/>\r\ndate = {2022-10-05},<br \/>\r\nurldate = {2022-10-05},<br \/>\r\nbooktitle = {Proceedings of the 30th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN 2022)},<br \/>\r\nabstract = {The recently introduced weakly disentangled representations proposed to relax some constraints of the previous definitions of disentanglement, in exchange for more flexibility. However, at the moment, weak disentanglement can only be achieved by increasing the amount of supervision as the number of factors of variation of the data increases. In this paper, we introduce modular representations for weak disentanglement, a novel method that allows to keep the amount of supervised information constant with respect to the number of generative factors. 
The experiments show that models using modular representations can increase their performance with respect to previous work without the need of additional supervision.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {conference}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('220','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_220\" style=\"display:none;\"><div class=\"tp_abstract_entry\">The recently introduced weakly disentangled representations proposed to relax some constraints of the previous definitions of disentanglement, in exchange for more flexibility. However, at the moment, weak disentanglement can only be achieved by increasing the amount of supervision as the number of factors of variation of the data increases. In this paper, we introduce modular representations for weak disentanglement, a novel method that allows to keep the amount of supervised information constant with respect to the number of generative factors. 
The experiments show that models using modular representations can increase their performance with respect to previous work without the need of additional supervision.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('220','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_220\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-file-pdf\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/arxiv.org\/pdf\/2209.05336.pdf\" title=\"https:\/\/arxiv.org\/pdf\/2209.05336.pdf\" target=\"_blank\">https:\/\/arxiv.org\/pdf\/2209.05336.pdf<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('220','tp_links')\">Close<\/a><\/p><\/div><\/div><div class=\"tp_pub_image_right\"><img decoding=\"async\" name=\"Modular Representations for Weak Disentanglement\" src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/esann.png\" width=\"80\" alt=\"Modular Representations for Weak Disentanglement\" \/><\/div><\/div><\/div><div class=\"tablenav\"><div class=\"tablenav-pages\"><span class=\"displaying-num\">224 entries<\/span> <a class=\"page-numbers button disabled\">&laquo;<\/a> <a class=\"page-numbers button disabled\">&lsaquo;<\/a> 1 of 5 <a href=\"https:\/\/pages.di.unipi.it\/bacciu\/publications\/all\/?limit=2&amp;tgid=&amp;yr=&amp;type=&amp;usr=&amp;auth=&amp;tsr=#tppubs\" title=\"next page\" class=\"page-numbers button\">&rsaquo;<\/a> <a href=\"https:\/\/pages.di.unipi.it\/bacciu\/publications\/all\/?limit=5&amp;tgid=&amp;yr=&amp;type=&amp;usr=&amp;auth=&amp;tsr=#tppubs\" title=\"last page\" class=\"page-numbers button\">&raquo;<\/a> 
<\/div><\/div><\/div><\/code><\/p>\n","protected":false},"excerpt":{"rendered":"","protected":false},"author":19,"featured_media":0,"parent":13,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-1410","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/pages.di.unipi.it\/bacciu\/wp-json\/wp\/v2\/pages\/1410","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/pages.di.unipi.it\/bacciu\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/pages.di.unipi.it\/bacciu\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/pages.di.unipi.it\/bacciu\/wp-json\/wp\/v2\/users\/19"}],"replies":[{"embeddable":true,"href":"https:\/\/pages.di.unipi.it\/bacciu\/wp-json\/wp\/v2\/comments?post=1410"}],"version-history":[{"count":2,"href":"https:\/\/pages.di.unipi.it\/bacciu\/wp-json\/wp\/v2\/pages\/1410\/revisions"}],"predecessor-version":[{"id":1443,"href":"https:\/\/pages.di.unipi.it\/bacciu\/wp-json\/wp\/v2\/pages\/1410\/revisions\/1443"}],"up":[{"embeddable":true,"href":"https:\/\/pages.di.unipi.it\/bacciu\/wp-json\/wp\/v2\/pages\/13"}],"wp:attachment":[{"href":"https:\/\/pages.di.unipi.it\/bacciu\/wp-json\/wp\/v2\/media?parent=1410"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}