Publications
2025

Conference and Workshop Papers
[c3]
Situation Calculus Temporally Lifted Abstractions for Generalized Planning
To appear in Proc. of the 39th Annual AAAI Conference on Artificial Intelligence (AAAI 2025), Philadelphia, PA, USA.
[paper]
We present a new formal framework for generalized planning (GP) based on the situation calculus extended with LTL constraints. The GP problem is specified by a first-order basic action theory whose models are the problem instances. This low-level theory is then abstracted into a high-level propositional nondeterministic basic action theory with a single model. A refinement mapping relates the two theories. LTL formulas are used to specify the temporally extended goals as well as assumed trace constraints. If all LTL trace constraints hold at the low level and the high-level model can simulate all the low-level models with respect to the mapping, we say that we have a temporally lifted abstraction. We prove that if we have such an abstraction and the agent has a strategy to achieve an LTL goal under some trace constraints at the abstract level, then there exists a refinement of the strategy to achieve the refinement of the goal at the concrete level. We use LTL synthesis to generate the strategy at the abstract level. We illustrate our approach by synthesizing a program that solves a data structure manipulation problem.
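A schematic rendering of the main transfer result may help readers skimming this entry; the notation below is assumed for illustration only and is not necessarily the paper's own symbols:

\[
\underbrace{\mathrm{TLA}_m(\mathcal{D}_{l}, \mathcal{D}_{h})}_{\text{temporally lifted abstraction}}
\;\wedge\;
\exists \sigma_h.\; \sigma_h \text{ enforces } (\psi \rightarrow \varphi) \text{ in } \mathcal{D}_{h}
\;\Longrightarrow\;
\exists \sigma_l.\; \sigma_l \text{ enforces } m(\varphi) \text{ in } \mathcal{D}_{l}
\]

Here \(\mathcal{D}_l\) is the first-order low-level basic action theory, \(\mathcal{D}_h\) the propositional high-level one, \(m\) the refinement mapping, \(\psi\) the LTL trace constraints, and \(\varphi\) the LTL goal; the concrete strategy \(\sigma_l\) is obtained by refining the abstract strategy \(\sigma_h\) through \(m\).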
2024

Conference and Workshop Papers
[c2]
Situation Calculus Temporally Lifted Abstractions for Generalized Planning - Extended Abstract
Workshop on Symbolic and Neuro-Symbolic Architectures for Intelligent Robotics Technology (SYNERGY 2024), co-located with KR 2024, Hanoi, Vietnam.
[paper]
We present a new formal framework for generalized planning (GP) based on the situation calculus extended with LTL constraints. The GP problem is specified by a first-order basic action theory whose models are the problem instances. This low-level theory is then abstracted into a high-level propositional nondeterministic basic action theory with a single model. A refinement mapping relates the two theories. LTL formulas are used to specify the temporally extended goals as well as assumed trace constraints. If all LTL trace constraints hold at the low level and the high-level model can simulate all the low-level models with respect to the mapping, we say that we have a temporally lifted abstraction. We prove that if we have such an abstraction and the agent has a strategy to achieve an LTL goal under some trace constraints at the abstract level, then there exists a refinement of the strategy to achieve the refinement of the goal at the concrete level. We use LTL synthesis to generate the strategy at the abstract level. We illustrate our approach by synthesizing a program that solves a data structure manipulation problem.
@inproceedings{DBLP:conf/kodis/GiacomoLM24,
author = {Giuseppe De Giacomo and Yves Lesp{\'{e}}rance and Matteo Mancanelli},
editor = {Luc{\'{\i}}a G{\'{o}}mez {\'{A}}lvarez and Jonas Haldimann and Jesse Heyninck and Srdjan Vesic and Francesco Fabiano and Marcello Balduccini},
title = {Situation Calculus Temporally Lifted Abstractions for Generalized Planning - Extended Abstract},
booktitle = {Joint Proceedings of the Joint Workshop on Knowledge Diversity and Cognitive Aspects of {KR} and the Workshop on Symbolic and Neuro-Symbolic Architectures for Intelligent Robotics Technology (KoDis-CAKR-SYNERGY 2024) co-located with the 21st International Conference on Principles of Knowledge Representation and Reasoning {(KR} 2024), Hanoi, Vietnam, November 2-8, 2024},
series = {{CEUR} Workshop Proceedings},
volume = {3876},
publisher = {CEUR-WS.org},
year = {2024},
}
2023

Conference and Workshop Papers
[c1]
PHYDI: Initializing Parameterized Hypercomplex Neural Networks as Identity Functions
IEEE International Workshop on Machine Learning for Signal Processing (MLSP 2023), Rome, Italy.
[doi]
[paper]
[poster]
[code]
Top 5% Outstanding Paper
Neural models based on hypercomplex algebra systems are growing and proliferating for a plethora of applications, ranging from computer vision to natural language processing. Hand in hand with their adoption, parameterized hypercomplex neural networks (PHNNs) are growing in size and no techniques have been adopted so far to control their convergence at a large scale. In this paper, we study the convergence of PHNNs and propose parameterized hypercomplex identity initialization (PHYDI), a method to improve their convergence at different scales, leading to more robust performance when the number of layers scales up, while also reaching the same performance with fewer iterations. We show the effectiveness of this approach in different benchmarks and with common ResNet- and Transformer-based PHNN architectures.
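As a rough illustration of the identity-at-initialization idea behind this line of work, here is a minimal sketch under stated assumptions, not the paper's exact PHYDI construction: a residual block with a zero-initialized learnable gate computes the identity function at step 0, and ordinary Linear layers stand in for parameterized hypercomplex (PHM) ones.

import torch
import torch.nn as nn

class IdentityInitBlock(nn.Module):
    """Residual block that computes the identity function at initialization."""
    def __init__(self, dim: int):
        super().__init__()
        # Stand-in for a parameterized hypercomplex sublayer (hypothetical
        # simplification; the paper targets PHNN layers, not plain Linear).
        self.body = nn.Sequential(
            nn.Linear(dim, dim),
            nn.ReLU(),
            nn.Linear(dim, dim),
        )
        # Learnable gate initialized to zero: the residual branch contributes
        # nothing at step 0, so a deep stack starts as the identity map.
        self.gate = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.gate * self.body(x)

# Sanity check: a freshly initialized block leaves its input unchanged.
x = torch.randn(4, 16)
assert torch.allclose(IdentityInitBlock(16)(x), x)

Because every block is the identity at initialization, gradients flow unimpeded through arbitrarily deep stacks, which is the intuition behind the more robust convergence reported as depth scales up.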
@inproceedings{mancanelli2023MLSP,
author={Mancanelli, Matteo and Grassucci, Eleonora and Uncini, Aurelio and Comminiello, Danilo},
booktitle={2023 IEEE 33rd International Workshop on Machine Learning for Signal Processing (MLSP)},
title={{PHYDI: I}nitializing Parameterized Hypercomplex Neural Networks as Identity Functions},
year={2023},
organization={IEEE},
pages={1--6},
doi={10.1109/MLSP55844.2023.10285926}
}