Scientific publications

Competition of Mechanisms: Tracing How Language Models Handle Facts and Counterfactuals

LADE

Abstract

Interpretability research aims to bridge the gap between empirical success and our scientific understanding of the inner workings of large language models (LLMs). However, most existing research focuses on analyzing a single mechanism, such as how models copy or recall factual knowledge. In this work, we propose a formulation of competition of mechanisms, which focuses on the interplay of multiple mechanisms instead of individual mechanisms and traces how one of them becomes dominant in the final prediction. We uncover how and where mechanisms compete within LLMs using two interpretability methods: logit inspection and attention modification. Our findings show traces of the mechanisms and their competition across various model components and reveal attention positions that effectively control the strength of certain mechanisms.
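As an illustration of the "logit inspection" method mentioned in the abstract, the sketch below projects each layer's residual stream onto the vocabulary (logit-lens style) and compares the factual and counterfactual tokens on a prompt where the context redefines a well-known fact. The model (GPT-2), prompt, and token choices are illustrative assumptions, not the authors' exact experimental setup.

```python
# Minimal logit-inspection sketch (logit-lens style); model and prompt are
# illustrative assumptions, not the paper's exact setup.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# Counterfactual-style prompt: the context redefines a well-known fact.
prompt = "Redefine: the capital of France is London. The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# Project the last-position residual stream of every layer onto the vocabulary
# through the final layer norm and the (tied) unembedding matrix.
unembed = model.lm_head.weight            # (vocab_size, d_model)
ln_f = model.transformer.ln_f
factual = tokenizer(" Paris", add_special_tokens=False).input_ids[0]
counterfactual = tokenizer(" London", add_special_tokens=False).input_ids[0]

for layer, hidden in enumerate(outputs.hidden_states):
    resid = ln_f(hidden[0, -1])           # last-token residual stream
    logits = resid @ unembed.T
    print(f"layer {layer:2d}  factual(Paris)={logits[factual]:7.2f}  "
          f"counterfactual(London)={logits[counterfactual]:7.2f}")
```

Tracing where the counterfactual token's logit overtakes the factual one across layers is one way such a tool can localize where the competing mechanisms diverge.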

Authors

Francesco Ortu, Zhijing Jin, Diego Doimo, Mrinmaya Sachan, Alberto Cazzaniga, Bernhard Schölkopf

Journal

Accepted at the Annual Meeting of the Association for Computational Linguistics (ACL), arXiv preprint: 2402.11655

Publication date

06/06/2024

View the publication