Here you can find a consolidated (a.k.a. slowly updated) list of my publications. A frequently updated (and possibly noisy) list of works is available on my Google Scholar profile.
Please find below a short list of highlighted publications from my recent activity.
Rosasco, Andrea; Carta, Antonio; Cossu, Andrea; Lomonaco, Vincenzo; Bacciu, Davide
Distilled Replay: Overcoming Forgetting through Synthetic Samples (Workshop)
IJCAI 2021 Workshop on Continual Semi-Supervised Learning (CSSL 2021), 2021.
@workshop{Rosasco2021,
title = {Distilled Replay: Overcoming Forgetting through Synthetic Samples},
author = {Andrea Rosasco and Antonio Carta and Andrea Cossu and Vincenzo Lomonaco and Davide Bacciu},
url = {https://arxiv.org/abs/2103.15851},
year = {2021},
date = {2021-08-19},
urldate = {2021-08-19},
booktitle = {IJCAI 2021 Workshop on Continual Semi-Supervised Learning (CSSL 2021)},
abstract = {Replay strategies are Continual Learning techniques which mitigate catastrophic forgetting by keeping a buffer of patterns from previous experience, which are interleaved with new data during training. The amount of patterns stored in the buffer is a critical parameter which largely influences the final performance and the memory footprint of the approach. This work introduces Distilled Replay, a novel replay strategy for Continual Learning which is able to mitigate forgetting by keeping a very small buffer (up to one pattern per class) of highly informative samples. Distilled Replay builds the buffer through a distillation process which compresses a large dataset into a tiny set of informative examples. We show the effectiveness of our Distilled Replay against naive replay, which randomly samples patterns from the dataset, on four popular Continual Learning benchmarks.},
keywords = {},
pubstate = {published},
tppubtype = {workshop}
}
Replay strategies are Continual Learning techniques which mitigate catastrophic forgetting by keeping a buffer of patterns from previous experience, which are interleaved with new data during training. The amount of patterns stored in the buffer is a critical parameter which largely influences the final performance and the memory footprint of the approach. This work introduces Distilled Replay, a novel replay strategy for Continual Learning which is able to mitigate forgetting by keeping a very small buffer (up to one pattern per class) of highly informative samples. Distilled Replay builds the buffer through a distillation process which compresses a large dataset into a tiny set of informative examples. We show the effectiveness of our Distilled Replay against naive replay, which randomly samples patterns from the dataset, on four popular Continual Learning benchmarks.
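For readers unfamiliar with replay strategies, here is a minimal, hypothetical sketch of the naive replay baseline the abstract compares against: a tiny per-class buffer of stored patterns is interleaved with each new mini-batch during training. The function and parameter names below are illustrative assumptions, not taken from the paper, and the buffer is filled by random sampling; Distilled Replay would instead learn the buffer contents through dataset distillation.

import random
import torch
import torch.nn as nn
import torch.nn.functional as F

def train_experience(model, optimizer, batches, buffer, patterns_per_class=1, epochs=1):
    # "batches" is a list of (x, y) mini-batches for the current experience;
    # x: float tensor [B, ...], y: long tensor [B].
    for _ in range(epochs):
        random.shuffle(batches)
        for x, y in batches:
            if buffer:
                # Interleave stored patterns with the new mini-batch.
                bx = torch.stack([p for p, _ in buffer])
                by = torch.stack([t for _, t in buffer])
                x, y = torch.cat([x, bx]), torch.cat([y, by])
            loss = F.cross_entropy(model(x), y)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    # Naive buffer update: keep a few randomly encountered patterns per class.
    for x, y in batches:
        for xi, yi in zip(x, y):
            if sum(int(t.item() == yi.item()) for _, t in buffer) < patterns_per_class:
                buffer.append((xi, yi))
    return buffer

# Toy usage on a stream of two "experiences" with random data (hypothetical shapes):
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
buffer = []
for experience in range(2):
    batches = [(torch.randn(32, 1, 28, 28), torch.randint(0, 10, (32,))) for _ in range(5)]
    buffer = train_experience(model, opt, batches, buffer)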