The rapid diffusion of artificial intelligence (AI) and data-driven decision systems has fundamentally reshaped organizational processes, managerial judgment, and stakeholder relationships. Although technical performance metrics and legal regulations are well established, a disconnect persists between rigorous engineering standards and the subjective expectations of diverse stakeholders. This paper proposes a novel conceptual framework, grounded in stakeholder theory, that positions trust as the operational foundation of ethical AI. We argue that ethical AI rests on two foundational dimensions, fairness and safety, which reflect an organization’s moral obligations toward its stakeholders. We conceptualize fairness as the absence of bias, discrimination, and systematic exclusion in data and algorithms, and safety as the protection of privacy, confidentiality, and security across the AI lifecycle. The intersection of these two dimensions yields four distinct trust conditions that shape stakeholder acceptance, resistance, or disengagement. By integrating ethics, trust, and stakeholder theory, the paper advances a unifying conceptual model that clarifies how ethical AI generates legitimacy and sustained value. We conclude by outlining practical implications for organizational governance and proposing a future research agenda for management scholars.