Recent advances in large language models (LLMs) have led to the development of thinking language models that generate extensive internal reasoning chains before producing responses. While these models achieve improved performance, the underlying mechanisms enabling their reasoning capabilities remain poorly understood. This work studies the reasoning processes of thinking LLMs by analyzing DeepSeek-R1-Distill models and comparing them with non-thinking models such as GPT-4o. Through systematic experiments on 300 tasks spanning 10 diverse categories, we identify key behavioral patterns that characterize thinking models, including expressing uncertainty, generating examples to test working hypotheses, and backtracking within reasoning chains. We demonstrate that these behaviors are mediated by linear directions in the model's activation space and can be controlled using steering vectors. By extracting and applying these vectors, we provide a method to modulate specific aspects of the model's reasoning process, such as its tendency to backtrack or express uncertainty. Our findings not only advance the understanding of how thinking models reason but also offer practical tools for steering their reasoning processes in a controlled and interpretable manner. We validate our approach on two DeepSeek-R1-Distill models, showing consistent results across different model architectures.
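
The sketch below illustrates the general idea of activation steering referenced in the abstract: computing a steering vector as the difference of mean residual-stream activations between contrastive prompts and adding it back at inference time. It is a minimal illustration, not the paper's exact pipeline; the checkpoint name, layer index, scaling coefficient, and contrastive prompts are illustrative assumptions.

```python
# Minimal sketch of difference-of-means activation steering (assumptions noted inline).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # assumed checkpoint
LAYER, ALPHA = 12, 4.0                               # assumed layer index and steering strength

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype=torch.float16, device_map="auto")
model.eval()

def mean_activation(prompts: list[str]) -> torch.Tensor:
    """Mean last-token hidden state at the output of decoder layer LAYER."""
    acts = []
    for p in prompts:
        ids = tok(p, return_tensors="pt").to(model.device)
        with torch.no_grad():
            # hidden_states[0] is the embedding output, so layer LAYER's output is at index LAYER + 1.
            hs = model(**ids, output_hidden_states=True).hidden_states[LAYER + 1]
        acts.append(hs[0, -1])
    return torch.stack(acts).mean(0)

# Placeholder contrastive prompt sets that do / do not exhibit the target behavior (e.g. backtracking).
pos_prompts = ["Wait, that can't be right. Let me reconsider the previous step."]
neg_prompts = ["The answer follows directly from the first step."]
steering_vec = mean_activation(pos_prompts) - mean_activation(neg_prompts)

def add_vector(module, inputs, output):
    """Forward hook: add the scaled steering vector to the layer's residual-stream output."""
    hidden = output[0] if isinstance(output, tuple) else output
    hidden = hidden + ALPHA * steering_vec.to(hidden.dtype)
    return (hidden, *output[1:]) if isinstance(output, tuple) else hidden

handle = model.model.layers[LAYER].register_forward_hook(add_vector)
try:
    ids = tok("Solve: 17 * 24 = ?", return_tensors="pt").to(model.device)
    out = model.generate(**ids, max_new_tokens=64)
    print(tok.decode(out[0], skip_special_tokens=True))
finally:
    handle.remove()  # restore the unsteered model
```

Negating the vector (or setting ALPHA below zero) would suppress the behavior rather than amplify it, which is the sense in which such directions allow controlled, interpretable modulation of the reasoning process.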