A structured process used to identify, assess, and mitigate potential risks associated with an AI system before and during its deployment. An AIIA evaluates factors such as fairness, transparency, privacy, and accountability, helping organisations demonstrate compliance with AI regulations and adherence to responsible innovation practices.
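As a rough illustration only, the kind of record an AIIA might produce can be sketched as a simple data structure. The field names, risk scale, and review rule below are illustrative assumptions, not a prescribed schema from any regulation or framework.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskLevel(Enum):
    """Illustrative risk ratings; real frameworks define their own scales."""
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


@dataclass
class ImpactAssessment:
    """Minimal sketch of an AI Impact Assessment record (hypothetical schema)."""
    system_name: str
    fairness: RiskLevel
    transparency: RiskLevel
    privacy: RiskLevel
    accountability: RiskLevel
    mitigations: list[str] = field(default_factory=list)

    def requires_review(self) -> bool:
        """Flag the system for further review if any factor is rated high risk."""
        return RiskLevel.HIGH in (
            self.fairness, self.transparency, self.privacy, self.accountability
        )


# Hypothetical example: assessing a CV-screening model before deployment.
assessment = ImpactAssessment(
    system_name="cv-screening-model",
    fairness=RiskLevel.HIGH,
    transparency=RiskLevel.MEDIUM,
    privacy=RiskLevel.LOW,
    accountability=RiskLevel.MEDIUM,
    mitigations=["bias audit before deployment", "human review of rejections"],
)
print(assessment.requires_review())  # True: fairness is rated high risk
```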