Seminars - Department of Computer Science (Dipartimento di Informatica), valid from 14.09.2025 to 14.09.2026.
https://www.di.univr.it/?ent=seminario&lang=it&rss=0

Generative Geometrical Learning: Injecting structure in 3D and 4D Generation
https://www.di.univr.it/?ent=seminario&lang=it&rss=0&id=6696
Speaker: Riccardo Marin; Affiliation: TUM (Technical University of Munich); Date: 2025-09-19; Time: 11.00; Venue: Room H - CV2; Internal contact: Umberto Castellani
Abstract: Generative AI has increasingly extended to 3D data, offering unprecedented opportunities for the synthesis and manipulation of shapes. Such advancements, driven by large-scale datasets and substantial computational power, appear to reinforce the "bitter lesson" that scale is the key driver of progress. However, how good are these models at inferring and preserving structure in the data? Several studies indicate that even foundational vision models trained on billions of images lack a basic understanding of geometry. This is further exacerbated when the aim is to synthesize 4D assets, where shapes are supposed to evolve over time while respecting physical laws. Incorporating geometric inductive biases and structural insights into the learning process not only improves performance but also opens up new avenues for applications and research.
Bio: Riccardo is a researcher and interim professor at the Computer Vision Group of the Technical University of Munich (TUM), part of the Munich Center for Machine Learning (MCML), and a member of the European Laboratory for Learning and Intelligent Systems (ELLIS). Previously, he was a Marie Curie postdoc at the University of Tübingen and a postdoc at Sapienza University of Rome. Riccardo obtained his PhD from the University of Verona, receiving the Best PhD Thesis Award in Computer Graphics from the Italian Chapter of Eurographics. His research focuses on 3D Geometry Processing, Spectral Shape Analysis, and, in particular, on Shape Matching and Virtual Humans applications.
Fri, 19 Sep 2025 11:00:00 +0200

An entire solution to the Ginzburg-Landau system in three dimensions with a singular nodal set
https://www.di.univr.it/?ent=seminario&lang=it&rss=0&id=6707
Speaker: Dr. Nicola Picenni; Affiliation: Università degli Studi di Pisa; Date: 2025-09-29; Time: 11.30; Venue: Room E; Internal contact: Giacomo Canevari
Abstract: We show how to construct an entire saddle solution to the complex Ginzburg-Landau system in three dimensions whose zero set is the union of two perpendicular, intersecting lines, and whose blow-downs concentrate on these lines with multiplicity one. The solution is obtained as the limit of a sequence of minimum problems in a space of functions with suitable symmetries. The talk is based on a joint work with Michele Caselli (SNS).
Mon, 29 Sep 2025 11:30:00 +0200
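As background for the abstract above (given here only for orientation; the talk's precise setting may differ), a standard form of the complex Ginzburg-Landau system is

\[
-\Delta u = u\,(1 - |u|^2), \qquad u : \mathbb{R}^3 \to \mathbb{C},
\]

where an entire solution is one defined on all of \(\mathbb{R}^3\) and the nodal set is \(\{x \in \mathbb{R}^3 : u(x) = 0\}\).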
The quest for reliable testing and verification in machine learning security
https://www.di.univr.it/?ent=seminario&lang=it&rss=0&id=6709
Speaker: Antonio Emanuele Cinà; Affiliation: DIBRIS (Università di Genova); Date: 2025-10-08; Time: 16.30; Venue: Sala Verde (in person and online); Internal contact: Alessandro Farinelli
Abstract: Machine Learning (ML) has revolutionized numerous domains, becoming the de facto standard for complex decision-making and automation. At the same time, the widespread integration of ML into safety-critical applications introduces significant security concerns that must be identified and patched before deployment. However, unlike traditional software, ML models do not come with formal guarantees of correctness or robustness; instead, they rely on patterns learned from data, which makes them inherently difficult to reason about and verify. Consequently, current evaluation approaches rely heavily on empirical testing, often without consistent standards. This talk begins by introducing the foundations of ML security, highlighting key issues such as adversarial attacks that expose the fragility of current systems. We then shift focus to one of the central challenges in the field: evaluating the robustness of ML models remains largely an empirical process, with no widely accepted standards or formal verification methods in place. The seminar concludes by highlighting ongoing efforts to develop frameworks and tools that bring structure, reproducibility, and rigor to ML security evaluation.
Bio: Antonio Emanuele Cinà is an Assistant Professor (RTDA) at the University of Genoa, Italy. He received his Ph.D. (cum laude) in Computer Science from Ca' Foscari University of Venice in 2023 and has been a postdoctoral researcher at the CISPA Helmholtz Center for Information Security, Saarbrücken, Germany. His research investigates security risks arising from spurious or adversarial correlations in artificial intelligence systems, which can cause unexpected behaviors (e.g., misclassification or the generation of harmful content), as well as robustness benchmarking and the development of verification tools for trustworthy ML. More recently, he has investigated the reliability of cybersecurity systems that integrate artificial intelligence solutions, aiming to understand their behavior in order to improve their accuracy, safety, and security.
Zoom link: https://univr.zoom.us/j/88483412489
Wed, 8 Oct 2025 16:30:00 +0200
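To make the "adversarial attacks" mentioned in the last abstract concrete, the following is a minimal sketch of the classic one-step Fast Gradient Sign Method (FGSM). It is illustrative background only, not the speaker's method: the model, the random input data, and the epsilon value are placeholder assumptions.

import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """One-step FGSM: perturb x in the direction that increases the loss.

    Illustrative sketch only; model, data, and epsilon are placeholders.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Move each input component by +/- epsilon along the sign of the gradient,
    # then clamp back to the valid input range [0, 1].
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    # Toy classifier and random "images", just to show the call pattern.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    x = torch.rand(4, 1, 28, 28)          # batch of 4 fake 28x28 images
    y = torch.randint(0, 10, (4,))        # random labels
    x_adv = fgsm_attack(model, x, y)
    print((x_adv - x).abs().max())        # perturbation bounded by epsilon

Evaluations like the ones discussed in the seminar ask how a model's accuracy degrades under such perturbations, and how to make that measurement standardized and reproducible.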