In this blog post, we summarise our comments on the Working Document: Enforcement Mechanisms for Responsible #AIforAll (Working Document), submitted in response to NITI Aayog's call for comments from all stakeholders. Our full response is accessible here.
The response offers solution-oriented feedback on the Working Document released by NITI Aayog in November 2020. It is divided into two sections:
Section I: A Framework to Identify High-Risk Applications of AI. This section responds to a specific invitation in the Working Document for comments on a framework to identify high-risk applications. It presents a preliminary AI risk matrix that provides a framework to rank the risks posed by different use-cases of AI, regardless of the sector to which they belong. This risk matrix is predicated on four indicators, and further sub-indicators, for assessing high-risk AI applications: (i) the probability of risk; (ii) the scale of potential impact; (iii) the degree of autonomy of an AI system; and (iv) the severity of potential impact. The infographic of our risk matrix is set out below.
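To make the matrix concrete, one way to combine the four indicators into a single ranking can be sketched as a composite score. This is purely illustrative: the 1–5 rating scales, the equal weighting, the band thresholds, and the example ratings below are our own assumptions, not part of the response or the Working Document.

```python
# Illustrative sketch only: scales, weights, thresholds, and example
# ratings are assumptions, not drawn from the Working Document response.

INDICATORS = ("probability", "scale", "autonomy", "severity")

def risk_score(ratings: dict) -> float:
    """Average the four indicator ratings (each assumed on a 1-5 scale)."""
    missing = [k for k in INDICATORS if k not in ratings]
    if missing:
        raise ValueError(f"missing indicator ratings: {missing}")
    return sum(ratings[k] for k in INDICATORS) / len(INDICATORS)

def risk_band(score: float) -> str:
    """Map a composite score to a coarse risk band (thresholds assumed)."""
    if score >= 4.0:
        return "high"
    if score >= 2.5:
        return "medium"
    return "low"

# Hypothetical use-case rated highly on all four indicators.
example = {"probability": 4, "scale": 5, "autonomy": 4, "severity": 5}
print(risk_band(risk_score(example)))  # high
```

In practice, a regulator might weight the indicators unequally (e.g. giving severity more weight than probability) or treat any single maximal rating as sufficient for the high-risk band; the averaging above is only the simplest choice.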
Section II: Feedback on the Roles of the Oversight Body. This section provides feedback on the roles of the Oversight Body envisaged in the Working Document. It recommends seventeen functions that the Oversight Body can perform to discharge those roles and to conform to the Principles for Responsible AI identified in the "Working Document: Towards Responsible #AIforAll". An infographic presenting the seventeen functions is set out below; the functions in the white rectangles are our suggestions to support the roles envisaged for the Oversight Body in the Working Document.