Judging Category
Basic or Experimental Research
Student Rank
Graduate
College
Business
Faculty Sponsor
Dr. Farhad Moeeni, moeeni@astate.edu
Dr. Mathew Hill, mdhill@astate.edu
Description
This study explores the application of Reinforcement Learning (RL) techniques to improve personal budget management and financial decision-making. A traditional rule-based budgeting model is first developed as a baseline, allocating income to expenses, savings, and debt payments using fixed financial ratios and predefined rules. Although this approach provides a structured framework for financial planning, it lacks adaptability and cannot dynamically optimize decisions when financial conditions change.

To address these limitations, two reinforcement learning models, Q-Learning and Deep Q-Network (DQN), are implemented and compared with the traditional budgeting model. In the Q-Learning model, an agent interacts with simulated financial states such as income, expenses, savings rate, and debt payments. The agent learns optimal actions through iterative updates of Q-values, guided by a reward function designed to encourage higher savings, responsible debt repayment, and reduced overspending.

To further enhance scalability, a Deep Q-Network (DQN) model is introduced. Unlike traditional Q-Learning, which relies on a Q-table, DQN uses a neural network to approximate Q-values for different actions. This allows the model to handle more complex financial state spaces and learn adaptive budgeting policies that can potentially improve long-term financial stability.
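The tabular Q-Learning approach described above can be sketched in a few lines. The toy environment, state discretization, reward shape, and hyperparameters below are illustrative assumptions, not the study's actual implementation: states discretize a household's savings rate, actions nudge the savings share up or down, and the reward grows with savings, echoing the reward function's goal of encouraging higher savings.

```python
# Minimal sketch of tabular Q-Learning for a toy budgeting task.
# All environment details here (state grid, reward, constants) are
# hypothetical, chosen only to illustrate the Q-value update rule.
import random

random.seed(0)

N_STATES = 11          # savings rate discretized: 0%, 10%, ..., 100%
ACTIONS = [-1, 0, 1]   # decrease, keep, or increase the savings share
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2   # learning rate, discount, exploration

Q = [[0.0] * len(ACTIONS) for _ in range(N_STATES)]

def step(state, action):
    """Toy transition: shifting the savings share moves the state;
    the reward rises with the savings rate (illustrative only)."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = nxt / (N_STATES - 1)
    return nxt, reward

def choose(state):
    """Epsilon-greedy action selection over the Q-table row."""
    if random.random() < EPS:
        return random.randrange(len(ACTIONS))          # explore
    return max(range(len(ACTIONS)), key=lambda a: Q[state][a])  # exploit

for episode in range(500):
    s = random.randrange(N_STATES)
    for _ in range(20):
        a = choose(s)
        s2, r = step(s, ACTIONS[a])
        # Q-Learning update: Q(s,a) <- Q(s,a) + alpha*(r + gamma*max_a' Q(s',a') - Q(s,a))
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2

# After training, the greedy policy in a zero-savings state should
# favor increasing the savings share.
best = ACTIONS[max(range(len(ACTIONS)), key=lambda a: Q[0][a])]
print(best)
```

A DQN variant would replace the `Q` table with a neural network mapping a continuous financial state vector (income, expenses, savings rate, debt) to one Q-value per action, which is what lets it scale past a small discretized grid.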
Disciplines
Business | Business Analytics | Business Intelligence
License

This work is licensed under a Creative Commons Attribution-NonCommercial-No Derivative Works 4.0 International License.
Recommended Citation
Yaqub, Danish, "Reinforcement Learning Approaches for Intelligent Budget Management: A Comparison of Traditional Budget model with Q-Learning, and Deep Q-Network Models" (2026). Create@State. 20.
https://arch.astate.edu/evn-createstate/2026/posters/20