Model Control through Lightweight Activation Steering for Vision Language Models

Date

2025-06-23

Publisher

Virginia Tech

Abstract

This work introduces SteerVLM, a lightweight steering module designed to guide Vision-Language Models (VLMs) towards outputs that better adhere to desired instructions. Our approach learns from the latent embeddings of paired prompts encoding target and converse behaviors to dynamically adjust the activations connecting the language modality with the image context. This provides fine-grained, inference-time control over complex output semantics without modifying model weights, while preserving performance on off-target tasks. The steering module requires learnable parameters amounting to only 0.14% of the original VLM's size. It gains model control via dimension-wise activation modulation and adaptive layer-wise steering, without requiring pre-extracted static vectors or manual tuning of intervention points. Furthermore, we introduce VNIA (Visual Narrative Intent Alignment), a multimodal dataset created specifically to facilitate the development and evaluation of VLM steering techniques. Our method outperforms existing intervention techniques on VLM steering and hallucination mitigation benchmarks and offers a robust solution for multimodal model control through activation engineering.
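The abstract describes the mechanism only at a high level; the sketch below illustrates, in plain PyTorch, the general idea of steering with a direction derived from paired target/converse prompts, combined with dimension-wise modulation and an adaptive layer-wise gate. It is an illustrative assumption, not the thesis implementation: the SteeringModule class, its layers, and the toy tensors are hypothetical stand-ins.

```python
import torch
import torch.nn as nn

class SteeringModule(nn.Module):
    """Hypothetical lightweight steering head (illustrative sketch).

    Given one layer's hidden states and a steering direction derived from
    paired target/converse prompts, it predicts a per-dimension modulation
    and a scalar layer gate, then shifts the activations along the
    modulated direction.
    """

    def __init__(self, hidden_dim: int):
        super().__init__()
        # Dimension-wise modulation of the steering direction.
        self.dim_scale = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim),
            nn.Tanh(),
        )
        # Scalar gate deciding how strongly this layer is steered.
        self.layer_gate = nn.Sequential(
            nn.Linear(hidden_dim, 1),
            nn.Sigmoid(),
        )

    def forward(self, hidden: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq_len, hidden_dim); direction: (hidden_dim,)
        scale = self.dim_scale(hidden)           # per-dimension modulation
        gate = self.layer_gate(hidden.mean(1))   # (batch, 1) adaptive layer gate
        steer = scale * direction                # broadcast over batch and sequence
        return hidden + gate.unsqueeze(1) * steer


# Toy usage: the steering direction is the difference between the mean hidden
# states elicited by a "target" prompt and its "converse" counterpart.
hidden_dim = 64
target_acts = torch.randn(1, 10, hidden_dim)    # stand-in for target-prompt activations
converse_acts = torch.randn(1, 10, hidden_dim)  # stand-in for converse-prompt activations
direction = target_acts.mean(dim=(0, 1)) - converse_acts.mean(dim=(0, 1))

steerer = SteeringModule(hidden_dim)
layer_hidden = torch.randn(2, 10, hidden_dim)   # stand-in for one layer's activations
steered = steerer(layer_hidden, direction)
print(steered.shape)  # torch.Size([2, 10, 64])
```

In an actual VLM, a module of this kind would be attached at inference time (e.g., via forward hooks) so the base weights stay frozen, which is consistent with the abstract's claim of weight-free, inference-time control.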

Keywords

Vision Language Models, Activation Engineering, Steering, Large Language Models, Latent Space Arithmetic, Multimodal.
