Accountability in AI – Promoting Greater Social Trust
Draft – For discussion purposes only; does not represent the views of the G7 or its members. This discussion paper was drafted to guide the discussion during the breakout sessions at the December 6, 2018, G7 Multistakeholder Conference on Artificial Intelligence in Montreal, Canada.
This paper was developed at the request of the Government of Canada to support the G7 Multistakeholder Conference on Artificial Intelligence: Enabling the Responsible Adoption of AI on December 6, 2018. Co-leads from Canada and Japan developed this paper on accountability, the intent of which is to provide a starting point for discussions on the topic of Accountability in AI: Promoting Greater Social Trust at the conference. This paper and the discussion build on work that started at the 2016 Takamatsu ICT Ministerial Meeting and led, most recently, to the Charlevoix Common Vision for the Future of Artificial Intelligence.[1]
This paper is organized into two sections. The first provides information on work to date in this domain and sets out various concepts and distinctions worth noting when thinking about accountability and trust in AI. The second section reports on the consultation process and discusses potential actions for different stakeholder groups for the future.
Seven questions, organized under three broad headings, are proposed for framing the discussions at the conference:
- What are some shared principles for Artificial Intelligence (AI) accountability in all sectors?
- How do we determine which AI systems require more rigorous accountability regimes for their appropriate governance?
- Given that trust can be misplaced—individuals can over- and under-trust AI—how can accountability regimes promote the development of trustworthy AI that is appropriately trusted?
- How do we balance accountability with innovation so that the benefits of AI are responsibly and inclusively secured?
- How can we ensure a representative and diverse plurality of voices and perspectives in the development of international and national accountability regimes for AI?
- What mechanisms (regulatory vs. non-regulatory) are most appropriate to govern various applications of algorithmic decision-making?
- What role should different stakeholders (e.g. governments; international organizations; private developers, service providers and users; the legal system; etc.) play in ensuring accountability in AI, and coordination across jurisdictional and cultural boundaries?