LLM Causal Spatial Representations
Highlight
- Category: DS/ML
- Year: 2023
- Keywords: Large Language Model (LLM), Natural Language Processing (NLP), Python
Description
Recent work found high mutual information between the learned representations of large language models (LLMs) and the geospatial properties of their inputs, hinting at an emergent internal model of space. However, that work did not establish whether this internal model of space has any causal effect on LLM behavior, leading to criticism that the findings reflect mere statistical correlation. Our study focused on uncovering the causality of spatial representations in LLMs. In particular, we identified candidate spatial representations in DeBERTa and GPT-Neo using representational similarity analysis together with linear and non-linear probing. Our causal intervention experiments showed that these spatial representations influenced the models' performance on next-word prediction and on a downstream task that relies on geospatial information. Overall, our experiments suggested that LLMs learn and use an internal model of space when solving geospatially grounded tasks.
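A minimal sketch of the kind of linear-probing setup described above, not the study's actual pipeline: it assumes the EleutherAI/gpt-neo-125M checkpoint, an arbitrary middle layer, a toy list of city names with rough coordinates, and ridge regression as the probe. If latitude and longitude can be read out linearly from the hidden states, the probe's held-out error will be low.

```python
# Linear-probe sketch (assumptions: GPT-Neo 125M via Hugging Face transformers,
# layer 6, toy city data, ridge regression; the study's datasets, models, and
# layers may differ).
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import Ridge

MODEL_NAME = "EleutherAI/gpt-neo-125M"  # assumed checkpoint for illustration
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()

# Toy probe data: place names with approximate (latitude, longitude).
places = ["Paris", "Tokyo", "New York", "Cairo", "Sydney", "Lima"]
coords = np.array([
    [48.9, 2.4], [35.7, 139.7], [40.7, -74.0],
    [30.0, 31.2], [-33.9, 151.2], [-12.0, -77.0],
])

def hidden_state(text: str, layer: int = 6) -> np.ndarray:
    """Return the last-token hidden state of `text` at the chosen layer."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    return outputs.hidden_states[layer][0, -1].numpy()

# Stack representations and fit a linear probe from hidden state -> coordinates.
X = np.stack([hidden_state(p) for p in places])
probe = Ridge(alpha=1.0).fit(X, coords)
print("Predicted coordinates:", probe.predict(X[:2]))
```

In the causal direction, the analogous step would be to intervene on the probed subspace of the hidden states (e.g., ablating or patching it) and measure the change in next-word prediction and downstream geospatial task performance, rather than only reading the representations out.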