
Do LLMs reason causally, and can we causally understand them?

12 November 2025
Time:
15:00 - 16:30
Location:
Conference Hall, Building C-102, Area Science Park, Padriciano 99, Trieste
Speaker:
Zhijing Jin, incoming Assistant Professor at the University of Toronto and currently a research scientist at the Max Planck Institute for Intelligent Systems, Germany

This talk explores the dual challenge of enhancing the causal reasoning capabilities of large language models (LLMs) and developing methods to causally interpret their internal mechanisms. The first part examines how LLMs can better perform causal reasoning across diverse domains, from simple commonsense queries such as “Why is the ground wet?” to complex economic questions about the effects of minimum wage, through structured formal approaches and multi-agent frameworks. It introduces CauSciBench, a benchmark for evaluating scientific-level causal reasoning, and a Causal AI Scientist system that successfully reproduces the causal inference of over 100 scientific papers. The second part shifts focus to causally understanding LLMs themselves, presenting the latest interpretability methods: cross-layer transcoders for tracing multilingual representations, frameworks for analyzing the reasoning-memorization interplay mediated by single directions in model space, mechanisms underlying how models handle facts versus counterfactuals, causal approaches to quantifying robustness in mathematical reasoning, and methods for disentangling knowledge conflicts in vision-language models. Together, these complementary perspectives, improving LLMs’ causal reasoning abilities while causally interpreting their internal operations, advance both the practical capabilities of AI systems and our fundamental understanding of how they process and reason about causal relationships.