AI systems have evolved rapidly, permeating human interaction and making an impact in every industry around the world. Yet this fast pace of growth and evolution brings substantial challenges, chiefly around transparency and accountability. This chapter builds on the foundations laid in prior chapters, which framed AI systems as both complex and risky.

The first section describes the main obstacles to AI transparency: complex algorithms are opaque, their decision-making processes are poorly understood, and no standard frameworks exist for improving transparency. It examines how these issues erode trust and accountability, which can translate into ethical dilemmas, biased outcomes, and user frustration.

The second section sets out policy-driven solutions for improving transparency and accountability in AI systems. It offers technical insight into the structural and semantic design of interpretable models, explaining code for implementing explainable AI (XAI) with a particular focus on attention mechanisms for gene ranking. It also underscores the need to keep these systems transparent and accountable throughout their lifecycle by involving all relevant stakeholders early in a multidisciplinary co-creation process, complemented by long-term monitoring and periodic evaluation once the systems are in use.

Based on our research, the chapter proposes an actionable, practical pathway that all AI stakeholders could follow, enabling the responsible development and deployment of AI systems for societal good without compromising transparency and accountability.
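To make the XAI discussion concrete before the full treatment in the second section, the sketch below illustrates the general idea of attention-based gene ranking: a model learns attention weights over input genes while solving a prediction task, and those weights are then read off to rank genes by importance. This is a minimal illustrative example, not the chapter's actual implementation; the model name, architecture, hyperparameters, and synthetic data are all assumptions made for demonstration.

```python
import torch
import torch.nn as nn

class GeneAttentionRanker(nn.Module):
    """Toy classifier whose learned attention weights over genes
    can be inspected to rank genes by importance (hypothetical design)."""

    def __init__(self, n_genes: int, hidden_dim: int = 32, n_classes: int = 2):
        super().__init__()
        self.embed = nn.Linear(1, hidden_dim)       # embed each gene's expression value
        self.attn_score = nn.Linear(hidden_dim, 1)  # one scalar attention score per gene
        self.classifier = nn.Linear(hidden_dim, n_classes)

    def forward(self, x: torch.Tensor):
        # x: (batch, n_genes) expression matrix
        h = torch.tanh(self.embed(x.unsqueeze(-1)))   # (batch, n_genes, hidden)
        scores = self.attn_score(h).squeeze(-1)       # (batch, n_genes)
        weights = torch.softmax(scores, dim=-1)       # attention distribution over genes
        context = (weights.unsqueeze(-1) * h).sum(1)  # attention-weighted summary
        return self.classifier(context), weights

# Synthetic stand-in for a real gene-expression dataset.
torch.manual_seed(0)
X = torch.randn(64, 100)                # 64 samples, 100 genes
y = torch.randint(0, 2, (64,))          # binary labels

model = GeneAttentionRanker(n_genes=100)
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for _ in range(50):                     # brief illustrative training loop
    logits, _ = model(X)
    loss = loss_fn(logits, y)
    optim.zero_grad()
    loss.backward()
    optim.step()

# Rank genes by their mean attention weight across samples.
with torch.no_grad():
    _, weights = model(X)
    ranking = weights.mean(dim=0).argsort(descending=True)
print("Top 10 genes by attention weight:", ranking[:10].tolist())
```

The appeal of this pattern for transparency is that the explanation (the attention distribution) is produced by the same forward pass as the prediction, so the ranking reflects what the model actually attended to rather than a post hoc approximation.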