{"id":13,"date":"2016-03-23T10:02:53","date_gmt":"2016-03-23T09:02:53","guid":{"rendered":"http:\/\/pages.di.unipi.it\/dbacciu\/?page_id=13"},"modified":"2024-01-03T13:44:08","modified_gmt":"2024-01-03T12:44:08","slug":"publications","status":"publish","type":"page","link":"https:\/\/pages.di.unipi.it\/bacciu\/publications\/","title":{"rendered":"Publications"},"content":{"rendered":"\n<p>Here you can find a consolidated (a.k.a. slowly updated) list of my publications. A frequently updated (and possibly noisy) list of works is available on my <a href=\"https:\/\/scholar.google.it\/citations?user=1d5n2WkAAAAJ&amp;hl\" target=\"_blank\" rel=\"noreferrer noopener\">Google Scholar profile<\/a>.<\/p>\n\n\n\n<p>Please find below a short list of highlight publications for my recent activity.<\/p>\n\n\n<p style=\"text-align: left\"><code><div class=\"teachpress_pub_list\"><form name=\"tppublistform\" method=\"get\"><a name=\"tppubs\" id=\"tppubs\"><\/a><\/form><div class=\"teachpress_publication_list\"><div class=\"tp_publication tp_publication_conference\"><div class=\"tp_pub_image_left\"><img decoding=\"async\" name=\"Dual Algorithmic Reasoning\" src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/iclr.png\" width=\"60\" alt=\"Dual Algorithmic Reasoning\" \/><\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Numeroso, Danilo;  Bacciu, Davide;  Veli\u010dkovi\u0107, Petar<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('226','tp_links')\" style=\"cursor:pointer;\">Dual Algorithmic Reasoning<\/a> <span class=\"tp_pub_type conference\">Conference<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_booktitle\">Proceedings of the Eleventh International Conference on Learning Representations (ICLR 2023), <\/span><span class=\"tp_pub_additional_year\">2023<\/span><span class=\"tp_pub_additional_note\">, (Notable Spotlight paper)<\/span>.<\/p><p 
class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_226\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('226','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_226\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('226','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_226\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('226','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_226\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@conference{Numeroso2023,<br \/>\r\ntitle = {Dual Algorithmic Reasoning},<br \/>\r\nauthor = {Danilo Numeroso and Davide Bacciu and Petar Veli\u010dkovi\u0107},<br \/>\r\nurl = {https:\/\/openreview.net\/pdf?id=hhvkdRdWt1F},<br \/>\r\nyear  = {2023},<br \/>\r\ndate = {2023-05-01},<br \/>\r\nurldate = {2023-05-01},<br \/>\r\nbooktitle = {Proceedings of the Eleventh International Conference on Learning Representations (ICLR 2023)},<br \/>\r\nabstract = {Neural Algorithmic Reasoning is an emerging area of machine learning which seeks to infuse algorithmic computation in neural networks, typically by training neural models to approximate steps of classical algorithms. In this context, much of the current work has focused on learning reachability and shortest path graph algorithms, showing that joint learning on similar algorithms is beneficial for generalisation. However, when targeting more complex problems, such \"similar\" algorithms become more difficult to find. Here, we propose to learn algorithms by exploiting duality of the underlying algorithmic problem. Many algorithms solve optimisation problems. 
We demonstrate that simultaneously learning the dual definition of these optimisation problems in algorithmic learning allows for better learning and qualitatively better solutions. Specifically, we exploit the max-flow min-cut theorem to simultaneously learn these two algorithms over synthetically generated graphs, demonstrating the effectiveness of the proposed approach. We then validate the real-world utility of our dual algorithmic reasoner by deploying it on a challenging brain vessel classification task, which likely depends on the vessels\u2019 flow properties. We demonstrate a clear performance gain when using our model within such a context, and empirically show that learning the max-flow and min-cut algorithms together is critical for achieving such a result.},<br \/>\r\nnote = {Notable Spotlight paper},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {conference}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('226','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_226\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Neural Algorithmic Reasoning is an emerging area of machine learning which seeks to infuse algorithmic computation in neural networks, typically by training neural models to approximate steps of classical algorithms. In this context, much of the current work has focused on learning reachability and shortest path graph algorithms, showing that joint learning on similar algorithms is beneficial for generalisation. However, when targeting more complex problems, such &quot;similar&quot; algorithms become more difficult to find. Here, we propose to learn algorithms by exploiting duality of the underlying algorithmic problem. Many algorithms solve optimisation problems. 
We demonstrate that simultaneously learning the dual definition of these optimisation problems in algorithmic learning allows for better learning and qualitatively better solutions. Specifically, we exploit the max-flow min-cut theorem to simultaneously learn these two algorithms over synthetically generated graphs, demonstrating the effectiveness of the proposed approach. We then validate the real-world utility of our dual algorithmic reasoner by deploying it on a challenging brain vessel classification task, which likely depends on the vessels\u2019 flow properties. We demonstrate a clear performance gain when using our model within such a context, and empirically show that learning the max-flow and min-cut algorithms together is critical for achieving such a result.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('226','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_226\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"fas fa-globe\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/openreview.net\/pdf?id=hhvkdRdWt1F\" title=\"https:\/\/openreview.net\/pdf?id=hhvkdRdWt1F\" target=\"_blank\">https:\/\/openreview.net\/pdf?id=hhvkdRdWt1F<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('226','tp_links')\">Close<\/a><\/p><\/div><\/div><\/div><div class=\"tp_publication tp_publication_book\"><div class=\"tp_pub_image_left\"><img decoding=\"async\" name=\"Deep Learning in Biology and Medicine\" src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2022\/01\/book.jpg\" width=\"60\" alt=\"Deep Learning in Biology and Medicine\" \/><\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Bacciu, Davide;  Lisboa, Paulo J. 
G.;  Vellido, Alfredo<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('192','tp_links')\" style=\"cursor:pointer;\">Deep Learning in Biology and Medicine<\/a> <span class=\"tp_pub_type book\">Book<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_publisher\">World Scientific Publisher, <\/span><span class=\"tp_pub_additional_year\">2022<\/span>, <span class=\"tp_pub_additional_isbn\">ISBN: 978-1-80061-093-4<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_192\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('192','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_192\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('192','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_192\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('192','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_192\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@book{BacciuBook2022,<br \/>\r\ntitle = {Deep Learning in Biology and Medicine},<br \/>\r\nauthor = {Davide Bacciu and Paulo J. G. Lisboa and Alfredo Vellido},<br \/>\r\ndoi = {10.1142\/q0322},<br \/>\r\nisbn = {978-1-80061-093-4},<br \/>\r\nyear  = {2022},<br \/>\r\ndate = {2022-02-01},<br \/>\r\nurldate = {2022-02-01},<br \/>\r\npublisher = {World Scientific Publisher},<br \/>\r\nabstract = {Biology, medicine and biochemistry have become data-centric fields for which Deep Learning methods are delivering groundbreaking results. Addressing high impact challenges, Deep Learning in Biology and Medicine provides an accessible and organic collection of Deep Learning essays on bioinformatics and medicine. 
It caters for a wide readership, ranging from machine learning practitioners and data scientists seeking methodological knowledge to address biomedical applications, to life science specialists in search of a gentle reference for advanced data analytics.<br \/>\r\nWith contributions from internationally renowned experts, the book covers foundational methodologies in a wide spectrum of life sciences applications, including electronic health record processing, diagnostic imaging, text processing, as well as omics-data processing. This survey of consolidated problems is complemented by a selection of advanced applications, including cheminformatics and biomedical interaction network analysis. A modern and mindful approach to the use of data-driven methodologies in the life sciences also requires careful consideration of the associated societal, ethical, legal and transparency challenges, which are covered in the concluding chapters of this book.},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {book}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('192','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_192\" style=\"display:none;\"><div class=\"tp_abstract_entry\">Biology, medicine and biochemistry have become data-centric fields for which Deep Learning methods are delivering groundbreaking results. Addressing high impact challenges, Deep Learning in Biology and Medicine provides an accessible and organic collection of Deep Learning essays on bioinformatics and medicine. 
It caters for a wide readership, ranging from machine learning practitioners and data scientists seeking methodological knowledge to address biomedical applications, to life science specialists in search of a gentle reference for advanced data analytics.<br \/>\r\nWith contributions from internationally renowned experts, the book covers foundational methodologies in a wide spectrum of life sciences applications, including electronic health record processing, diagnostic imaging, text processing, as well as omics-data processing. This survey of consolidated problems is complemented by a selection of advanced applications, including cheminformatics and biomedical interaction network analysis. A modern and mindful approach to the use of data-driven methodologies in the life sciences also requires careful consideration of the associated societal, ethical, legal and transparency challenges, which are covered in the concluding chapters of this book.<\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('192','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_192\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1142\/q0322\" title=\"Follow DOI:10.1142\/q0322\" target=\"_blank\">doi:10.1142\/q0322<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('192','tp_links')\">Close<\/a><\/p><\/div><\/div><\/div><div class=\"tp_publication tp_publication_workshop\"><div class=\"tp_pub_image_left\"><img decoding=\"async\" name=\"Avalanche: an End-to-End Library for Continual Learning\" src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2024\/01\/cvpr.jpg\" width=\"60\" alt=\"Avalanche: an End-to-End Library for Continual Learning\" \/><\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> 
Lomonaco, Vincenzo;  Pellegrini, Lorenzo;  Cossu, Andrea;  Carta, Antonio;  Graffieti, Gabriele;  Hayes, Tyler L;  Lange, Matthias De;  Masana, Marc;  Pomponi, Jary; van de Ven, Gido;  Mundt, Martin;  She, Qi;  Cooper, Keiland;  Forest, Jeremy;  Belouadah, Eden;  Calderara, Simone;  Parisi, German I;  Cuzzolin, Fabio;  Tolias, Andreas;  Scardapane, Simone;  Antiga, Luca;  Ahmad, Subutai;  Popescu, Adrian;  Kanan, Christopher; van de Weijer, Joost;  Tuytelaars, Tinne;  Bacciu, Davide;  Maltoni, Davide<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('169','tp_links')\" style=\"cursor:pointer;\">Avalanche: an End-to-End Library for Continual Learning<\/a> <span class=\"tp_pub_type workshop\">Workshop<\/span> <\/p><p class=\"tp_pub_additional\"><span class=\"tp_pub_additional_booktitle\">Proceedings of the CVPR 2021 Workshop on Continual Learning, <\/span><span class=\"tp_pub_additional_publisher\">IEEE, <\/span><span class=\"tp_pub_additional_year\">2021<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_resource_link\"><a id=\"tp_links_sh_169\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('169','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_169\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('169','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_169\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@workshop{lomonaco2021avalanche,<br \/>\r\ntitle = {Avalanche: an End-to-End Library for Continual Learning},<br \/>\r\nauthor = {Vincenzo Lomonaco and Lorenzo Pellegrini and Andrea Cossu and Antonio Carta and Gabriele Graffieti and Tyler L Hayes and Matthias De Lange and Marc Masana and Jary Pomponi and Gido van de Ven and Martin Mundt and Qi She and Keiland Cooper and Jeremy Forest and Eden Belouadah and Simone 
Calderara and German I Parisi and Fabio Cuzzolin and Andreas Tolias and Simone Scardapane and Luca Antiga and Subutai Ahmad and Adrian Popescu and Christopher Kanan and Joost van de Weijer and Tinne Tuytelaars and Davide Bacciu and Davide Maltoni},<br \/>\r\nurl = {https:\/\/arxiv.org\/abs\/2104.00405, Arxiv},<br \/>\r\nyear  = {2021},<br \/>\r\ndate = {2021-06-19},<br \/>\r\nurldate = {2021-06-19},<br \/>\r\nbooktitle = {Proceedings of the CVPR 2021 Workshop on Continual Learning},<br \/>\r\npages = {3600-3610},<br \/>\r\npublisher = {IEEE},<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {workshop}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('169','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_169\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-arxiv\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/arxiv.org\/abs\/2104.00405\" title=\"Arxiv\" target=\"_blank\">Arxiv<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('169','tp_links')\">Close<\/a><\/p><\/div><\/div><\/div><div class=\"tp_publication tp_publication_article\"><div class=\"tp_pub_image_left\"><img decoding=\"async\" name=\"A Gentle Introduction to Deep Learning for Graphs\" src=\"http:\/\/pages.di.unipi.it\/bacciu\/wp-content\/uploads\/sites\/12\/2020\/06\/08936080.jpg\" width=\"60\" alt=\"A Gentle Introduction to Deep Learning for Graphs\" \/><\/div><div class=\"tp_pub_info\"><p class=\"tp_pub_author\"> Bacciu, Davide;  Errica, Federico;  Micheli, Alessio;  Podda, Marco<\/p><p class=\"tp_pub_title\"><a class=\"tp_title_link\" onclick=\"teachpress_pub_showhide('148','tp_links')\" style=\"cursor:pointer;\">A Gentle Introduction to Deep Learning for Graphs<\/a> <span class=\"tp_pub_type article\">Journal Article<\/span> <\/p><p 
class=\"tp_pub_additional\"><span class=\"tp_pub_additional_in\">In: <\/span><span class=\"tp_pub_additional_journal\">Neural Networks, <\/span><span class=\"tp_pub_additional_volume\">vol. 129, <\/span><span class=\"tp_pub_additional_pages\">pp. 203-221, <\/span><span class=\"tp_pub_additional_year\">2020<\/span>.<\/p><p class=\"tp_pub_menu\"><span class=\"tp_abstract_link\"><a id=\"tp_abstract_sh_148\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('148','tp_abstract')\" title=\"Show abstract\" style=\"cursor:pointer;\">Abstract<\/a><\/span> | <span class=\"tp_resource_link\"><a id=\"tp_links_sh_148\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('148','tp_links')\" title=\"Show links and resources\" style=\"cursor:pointer;\">Links<\/a><\/span> | <span class=\"tp_bibtex_link\"><a id=\"tp_bibtex_sh_148\" class=\"tp_show\" onclick=\"teachpress_pub_showhide('148','tp_bibtex')\" title=\"Show BibTeX entry\" style=\"cursor:pointer;\">BibTeX<\/a><\/span><\/p><div class=\"tp_bibtex\" id=\"tp_bibtex_148\" style=\"display:none;\"><div class=\"tp_bibtex_entry\"><pre>@article{gentleGraphs2020,<br \/>\r\ntitle = {A Gentle Introduction to Deep Learning for Graphs},<br \/>\r\nauthor = {Davide Bacciu and Federico Errica and Alessio Micheli and Marco Podda},<br \/>\r\nurl = {https:\/\/arxiv.org\/abs\/1912.12693, Arxiv<br \/>\r\nhttps:\/\/doi.org\/10.1016\/j.neunet.2020.06.006, Original Paper},<br \/>\r\ndoi = {10.1016\/j.neunet.2020.06.006},<br \/>\r\nyear  = {2020},<br \/>\r\ndate = {2020-09-01},<br \/>\r\nurldate = {2020-09-01},<br \/>\r\njournal = {Neural Networks},<br \/>\r\nvolume = {129},<br \/>\r\npages = {203-221},<br \/>\r\npublisher = {Elsevier},<br \/>\r\nabstract = {The adaptive processing of graph data is a long-standing research topic which has been lately consolidated as a theme of major interest in the deep learning community. 
The snap increase in the amount and breadth of related research has come at the price of little systematization of knowledge and attention to earlier literature. This work is designed as a tutorial introduction to the field of deep learning for graphs. It favours a consistent and progressive introduction of the main concepts and architectural aspects over an exposition of the most recent literature, for which the reader is referred to available surveys. The paper takes a top-down view to the problem, introducing a generalized formulation of graph representation learning based on a local and iterative approach to structured information processing. It introduces the basic building blocks that can be combined to design novel and effective neural models for graphs. The methodological exposition is complemented by a discussion of interesting research challenges and applications in the field. },<br \/>\r\nkeywords = {},<br \/>\r\npubstate = {published},<br \/>\r\ntppubtype = {article}<br \/>\r\n}<br \/>\r\n<\/pre><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('148','tp_bibtex')\">Close<\/a><\/p><\/div><div class=\"tp_abstract\" id=\"tp_abstract_148\" style=\"display:none;\"><div class=\"tp_abstract_entry\">The adaptive processing of graph data is a long-standing research topic which has been lately consolidated as a theme of major interest in the deep learning community. The snap increase in the amount and breadth of related research has come at the price of little systematization of knowledge and attention to earlier literature. This work is designed as a tutorial introduction to the field of deep learning for graphs. It favours a consistent and progressive introduction of the main concepts and architectural aspects over an exposition of the most recent literature, for which the reader is referred to available surveys. 
The paper takes a top-down view to the problem, introducing a generalized formulation of graph representation learning based on a local and iterative approach to structured information processing. It introduces the basic building blocks that can be combined to design novel and effective neural models for graphs. The methodological exposition is complemented by a discussion of interesting research challenges and applications in the field. <\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('148','tp_abstract')\">Close<\/a><\/p><\/div><div class=\"tp_links\" id=\"tp_links_148\" style=\"display:none;\"><div class=\"tp_links_entry\"><ul class=\"tp_pub_list\"><li><i class=\"ai ai-arxiv\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/arxiv.org\/abs\/1912.12693\" title=\"Arxiv\" target=\"_blank\">Arxiv<\/a><\/li><li><i class=\"fas fa-globe\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/doi.org\/10.1016\/j.neunet.2020.06.006\" title=\"Original Paper\" target=\"_blank\">Original Paper<\/a><\/li><li><i class=\"ai ai-doi\"><\/i><a class=\"tp_pub_list\" href=\"https:\/\/dx.doi.org\/10.1016\/j.neunet.2020.06.006\" title=\"Follow DOI:10.1016\/j.neunet.2020.06.006\" target=\"_blank\">doi:10.1016\/j.neunet.2020.06.006<\/a><\/li><\/ul><\/div><p class=\"tp_close_menu\"><a class=\"tp_close\" onclick=\"teachpress_pub_showhide('148','tp_links')\">Close<\/a><\/p><\/div><\/div><\/div><\/div><\/div><\/code><\/p>","protected":false},"excerpt":{"rendered":"<p>Here you can find a consolidated (a.k.a. slowly updated) list of my publications. A frequently updated (and possibly noisy) list of works is available on my Google Scholar profile. 
Please find below a short list of highlight publications for my recent activity.<\/p>\n","protected":false},"author":19,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-13","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/pages.di.unipi.it\/bacciu\/wp-json\/wp\/v2\/pages\/13","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/pages.di.unipi.it\/bacciu\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/pages.di.unipi.it\/bacciu\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/pages.di.unipi.it\/bacciu\/wp-json\/wp\/v2\/users\/19"}],"replies":[{"embeddable":true,"href":"https:\/\/pages.di.unipi.it\/bacciu\/wp-json\/wp\/v2\/comments?post=13"}],"version-history":[{"count":30,"href":"https:\/\/pages.di.unipi.it\/bacciu\/wp-json\/wp\/v2\/pages\/13\/revisions"}],"predecessor-version":[{"id":1522,"href":"https:\/\/pages.di.unipi.it\/bacciu\/wp-json\/wp\/v2\/pages\/13\/revisions\/1522"}],"wp:attachment":[{"href":"https:\/\/pages.di.unipi.it\/bacciu\/wp-json\/wp\/v2\/media?parent=13"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}