This research introduces a dynamic portfolio optimization framework based on the Proximal Policy Optimization (PPO) reinforcement learning algorithm, known for its stability and strong performance in continuous decision-making tasks. The proposed approach seeks to maximize long-term portfolio returns while controlling risk and transaction costs in a volatile financial market. Built on the open-source FinRL library, the framework formulates the problem as a Markov Decision Process (MDP) that incorporates historical market data, technical indicators, and transaction cost constraints. The state space comprises rolling-window features of asset returns together with current portfolio allocations, while the action space determines the weight distribution across assets. The reward function captures the portfolio's risk-adjusted return. PPO's clipped surrogate objective and entropy regularization yield stable policy updates and a balanced exploration-exploitation trade-off. Experimental results demonstrate that the model achieves superior cumulative return and Sharpe ratio relative to traditional benchmarks, indicating its potential for practical, AI-driven investment strategies in live trading environments.
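
The sketch below illustrates the MDP formulation described above in code, assuming a gymnasium-style environment and the stable-baselines3 PPO implementation (on which FinRL also builds). The environment class, window size, cost rate, and synthetic data are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of the described MDP, assuming gymnasium + stable-baselines3.
# State: rolling window of asset returns plus current weights.
# Action: target weights (via softmax). Reward: log-return minus trading cost.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO


class PortfolioEnv(gym.Env):
    """Toy portfolio environment (hypothetical, for illustration only)."""

    def __init__(self, returns: np.ndarray, window: int = 30, cost: float = 1e-3):
        super().__init__()
        self.returns, self.window, self.cost = returns, window, cost
        n_assets = returns.shape[1]
        self.observation_space = spaces.Box(
            low=-np.inf, high=np.inf,
            shape=(window * n_assets + n_assets,), dtype=np.float32,
        )
        # Unconstrained logits, mapped onto the probability simplex in step().
        self.action_space = spaces.Box(low=-5.0, high=5.0, shape=(n_assets,), dtype=np.float32)

    def _obs(self):
        win = self.returns[self.t - self.window : self.t].ravel()
        return np.concatenate([win, self.w]).astype(np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.t = self.window
        self.w = np.full(self.returns.shape[1], 1.0 / self.returns.shape[1])
        return self._obs(), {}

    def step(self, action):
        new_w = np.exp(action - action.max())
        new_w /= new_w.sum()                       # project logits onto the simplex
        turnover = np.abs(new_w - self.w).sum()    # proportional transaction cost
        reward = float(np.log1p(new_w @ self.returns[self.t]) - self.cost * turnover)
        self.w = new_w
        self.t += 1
        done = self.t >= len(self.returns)
        return self._obs(), reward, done, False, {}


# Illustrative training run on synthetic returns; in practice the data
# would come from historical market feeds (e.g., via FinRL's processors).
rets = np.random.default_rng(0).normal(5e-4, 0.01, size=(2000, 5))
env = PortfolioEnv(rets)
model = PPO("MlpPolicy", env, ent_coef=0.01, verbose=0)  # entropy bonus encourages exploration
model.learn(total_timesteps=10_000)
```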