The rapid diffusion of large language models (LLMs) into enterprise settings has given rise to an emergent phenomenon: Shadow AI, the unsanctioned use of AI tools by employees. While these tools offer productivity gains, they also pose significant regulatory, operational, and reputational risks. This study presents a mixed-methods analysis of Shadow AI through a simulated enterprise dataset (n = 215) and qualitative failure narratives. Findings highlight key risk domains, including data leakage, model hallucination, compliance breaches, and shadow process automation; notably, 41% of employees admitted to LLM use without organizational approval. Regression models identify policy absence, lack of training, and task pressure as the leading predictors of Shadow AI risk. The paper provides detailed visualizations, risk matrices, and a governance framework, and concludes with actionable policy and compliance recommendations for enterprise AI managers.
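The abstract reports regression models predicting Shadow AI risk from policy absence, lack of training, and task pressure. As a minimal sketch of how such an analysis could be specified, the snippet below fits a logistic regression with those three predictors; the column names are assumptions, and the data is synthetic placeholder data for illustration only, not the study's dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 215  # matches the paper's simulated sample size

# Synthetic stand-in data; column names are hypothetical, not the study's schema.
df = pd.DataFrame({
    "policy_absent": rng.integers(0, 2, n),   # 1 = no organizational AI-use policy
    "no_training":   rng.integers(0, 2, n),   # 1 = no AI-awareness training received
    "task_pressure": rng.integers(1, 6, n),   # 1-5 self-reported workload pressure
})

# Generate a binary outcome loosely consistent with the hypothesized predictors
# (coefficients chosen arbitrarily for this illustration).
logit = -2.0 + 1.2 * df.policy_absent + 0.9 * df.no_training + 0.4 * df.task_pressure
df["shadow_ai_use"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Logistic regression: unauthorized LLM use ~ governance and workload factors.
model = smf.logit(
    "shadow_ai_use ~ policy_absent + no_training + task_pressure", data=df
).fit()
print(model.summary())
```

Under this specification, positive and significant coefficients on the three predictors would correspond to the pattern the abstract describes, with odds ratios obtained by exponentiating the fitted coefficients.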