Publications
2024
Conference and Workshop Papers
[c2]
Situation Calculus Temporally Lifted Abstractions for Program Synthesis (Extended Abstract)
Workshop on Highlights of Reasoning about Actions, Planning and Reactive Synthesis, Oct. 2024
[paper]
We address a program synthesis task where one wants to provide a description of the behavior of common data structures and automatically synthesize a concrete program from a suitable abstraction. Our framework is based on nondeterministic situation calculus, extended with LTL trace constraints. We propose a notion of temporally lifted abstraction to address the scenario in which there is a single high-level action theory/model with incomplete information and nondeterministic actions, and a concrete low-level action theory with several models and complete information. LTL formulas are used to specify the temporally extended goals as well as assumed trace constraints. We show that if the agent has a strategy to achieve a goal under trace constraints at the high level, then there is a refinement of that strategy that achieves the refinement of the goal at the low level.
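As a hedged illustration (the atomic propositions below are hypothetical examples, not taken from the paper), a temporally extended goal and an assumed trace constraint for a stack-like data structure could be written in LTL as

\varphi_{goal} = \Diamond\,\mathit{sorted}   under the trace constraint   \varphi_{c} = \Box(\mathit{push} \rightarrow \Diamond\,\mathit{pop})

i.e., "eventually the structure is sorted", assuming every push is eventually followed by a pop. In this reading, the result above guarantees that a high-level strategy achieving \varphi_{goal} under \varphi_{c} can be refined into a low-level strategy achieving the refined goal.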
2023
Conference and Workshop Papers
[c1]
PHYDI: Initializing Parameterized Hypercomplex Neural Networks as Identity Functions
IEEE International Workshop on Machine Learning for Signal Processing (MLSP 2023), Sept. 2023
[doi]
[paper]
[poster]
[code]
Top 5% Outstanding Paper
Neural models based on hypercomplex algebra systems are growing and proliferating for a plethora of applications, ranging from computer vision to natural language processing. Hand in hand with their adoption, parameterized hypercomplex neural networks (PHNNs) are growing in size, and no techniques have been adopted so far to control their convergence at a large scale. In this paper, we study the convergence of PHNNs and propose parameterized hypercomplex identity initialization (PHYDI), a method to improve their convergence at different scales, leading to more robust performance when the number of layers scales up, while also reaching the same performance with fewer iterations. We show the effectiveness of this approach on different benchmarks and with common PHNNs with ResNet- and Transformer-based architectures.
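A minimal sketch of the identity-initialization idea, assuming PyTorch and a plain residual block; the class name, layer sizes, and the zero-initialized scaling parameter are illustrative assumptions, not the actual PHYDI implementation, which operates on parameterized hypercomplex layers:

import torch
import torch.nn as nn

class IdentityInitResidualBlock(nn.Module):
    # Residual block whose branch is scaled by a parameter initialized
    # to zero, so at initialization the block computes f(x) = x.
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.fc1 = nn.Linear(hidden_dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, hidden_dim)
        # Zero-initialized gate: the residual branch contributes nothing
        # at the start of training, which tends to stabilize deep stacks.
        self.alpha = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        return x + self.alpha * self.fc2(torch.relu(self.fc1(x)))

Stacking many such blocks yields a network that behaves as the identity at initialization, which is the kind of property the paper leverages to keep convergence stable as depth grows.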
@inproceedings{mancanelli2023MLSP,
author={Mancanelli, Matteo and Grassucci, Eleonora and Uncini, Aurelio and Comminiello, Danilo},
booktitle={2023 IEEE 33rd International Workshop on Machine Learning for Signal Processing (MLSP)},
title={{PHYDI: I}nitializing Parameterized Hypercomplex Neural Networks as Identity Functions},
year={2023},
organization={IEEE},
pages={1--6},
doi={10.1109/MLSP55844.2023.10285926}
}