As artificial intelligence (AI) continues to transform industries and societies, a robust legal framework to regulate its development and deployment has become imperative. AI poses distinct challenges concerning data privacy, accountability, bias, intellectual property, and ethics, necessitating comprehensive and adaptive legal mechanisms. This paper examines global legal frameworks for AI regulation through a comparative analysis of the approaches adopted by different jurisdictions, including the European Union, the United States, China, and India. It critically analyzes the European Union’s AI Act, one of the most structured regulatory frameworks, which categorizes AI applications by risk level and imposes correspondingly strict compliance obligations. In contrast, the United States follows a sectoral, largely self-regulatory approach, relying on proposed legislation such as the Algorithmic Accountability Act and guidance from agencies such as the FTC and NIST. China’s approach is state-driven and security-focused, exercising strict government oversight through the Personal Information Protection Law (PIPL) and AI-specific regulations. India, which has yet to enact formal AI legislation, relies on existing data protection laws and sectoral guidelines, with the proposed Digital India Act and AI advisory frameworks playing a pivotal role. The study identifies key trends, including risk-based regulation, ethical AI guidelines, liability frameworks, and cross-border governance challenges. It also explores the role of international cooperation in harmonizing AI laws to ensure responsible AI deployment while fostering innovation. The paper concludes with recommendations for a balanced regulatory approach, advocating a harmonized global framework that ensures AI safety, accountability, and ethical compliance without stifling technological progress.