To enhance the reasoning capabilities of smaller language models, employ a system of thinking that combines mental models, structured Chain-of-Thought processes, and deliberate reflection before responding to user queries.

Problem: Smaller transformer models exhibit weaker reasoning than their larger counterparts, whose advanced reasoning abilities stem from broader connection networks that facilitate cross-domain inference.

Solution: A two-step approach.

1. Fine-tuning: Begin by fine-tuning Llama 3.1, a smaller transformer model with 8 billion parameters, on an enhanced reasoning dataset to strengthen its cognitive capabilities.
2. Revelation of internal processes: Then apply system-of-thinking guidance techniques (Think, Plan, Reasoning, and Reflection) to unveil the model's internal thought process and the rationale underlying its answers.

Available system thinking models:
- Understand, Think, Plan, Reasoning and Reflection
- Chain-of-Thoughts
- Thinking Fast and Slow
- Critical Thinking
- Iceberg Mental Model
- Second Order Thinking
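For the fine-tuning step, the enhanced reasoning dataset needs a consistent serialization. A minimal sketch, assuming an illustrative XML-style tag set for the "Understand, Think, Plan, Reasoning and Reflection" model and a simple prompt/completion JSON schema (both are assumptions, not the document's prescribed format):

```python
import json

# Assumed tag set mirroring the Understand/Think/Plan/Reasoning/Reflection
# system-thinking model; the exact tags and schema are illustrative.
TAGS = ["understand", "think", "plan", "reasoning", "reflection"]

def format_training_example(question: str, sections: dict, answer: str) -> str:
    """Serialize one reasoning example into a tagged prompt/completion record."""
    parts = []
    for tag in TAGS:
        body = sections.get(tag, "").strip()
        if body:
            # Wrap each provided reasoning stage in its own tag pair.
            parts.append(f"<{tag}>\n{body}\n</{tag}>")
    parts.append(f"<answer>\n{answer.strip()}\n</answer>")
    return json.dumps({"prompt": question, "completion": "\n".join(parts)})

example = format_training_example(
    "What is 17 * 3?",
    {"think": "Break 17 * 3 into 10*3 + 7*3.", "reasoning": "30 + 21 = 51."},
    "51",
)
print(example)
```

Training on completions shaped this way teaches the model to emit its intermediate stages explicitly, which is what makes the second step possible.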
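For the second step, the model's tagged output can be split back into its Think, Plan, Reasoning, and Reflection stages so each part of the internal process is inspectable. A minimal sketch, again assuming the illustrative tag convention above rather than any fixed format:

```python
import re

# Assumed reasoning-stage tags; adjust to match the fine-tuned model's format.
TAGS = ("think", "plan", "reasoning", "reflection")

def parse_thought_process(response: str) -> dict:
    """Extract each tagged reasoning stage; missing stages map to ''."""
    sections = {}
    for tag in TAGS:
        match = re.search(rf"<{tag}>(.*?)</{tag}>", response, re.DOTALL)
        sections[tag] = match.group(1).strip() if match else ""
    return sections

reply = (
    "<think>The user asks for a sum.</think>"
    "<plan>Add the numbers, then verify.</plan>"
    "<reasoning>2 + 2 = 4.</reasoning>"
    "<reflection>The arithmetic checks out.</reflection>"
    "The answer is 4."
)
print(parse_thought_process(reply))
```

Separating the stages this way lets an application log or display the rationale while showing the user only the final answer.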