
Module Code - Title:

IN5103 - RISK, ETHICS, GOVERNANCE AND ARTIFICIAL INTELLIGENCE

Year Last Offered:

2020/1

Hours Per Week:

Lecture

2

Lab

0

Tutorial

1

Other

3

Private

4

Credits

6

Grading Type:

N

Prerequisite Modules:

Rationale and Purpose of the Module:

This module supports the MSc in Artificial Intelligence by providing a conceptual framework relating to risk, ethics, and governance that informs AI research. Such frameworks are now standard across international AI research programmes and are no longer optional components. For these reasons, the module builds on this rationale and provides a distinctive component informed by the established risk, ethics, and governance research group in the KBS. The purpose of the module is to frame AI technologies as technologies that must be developed in parallel with frameworks informed by risk, ethics, and governance.

Syllabus:

1. The module begins by framing the field: (a) defining AI; (b) exploring AI in different contexts; (c) interrogating the variance of AI design and the challenges that variation presents to AI; (d) examining the societal challenges posed by AI technologies; (e) supporting the framing of AI by developing conceptual frameworks informed by questions of risk, ethics, and governance; and (f) learning how to engage with emerging and future AI technologies in terms of these frameworks. Accordingly, the module aims to give students an understanding of the concepts of risk, ethics, and governance in the context of AI and emerging AI technologies. To achieve this, it is first necessary to provide a contextual understanding of the concept of AI and the challenges it poses to framing accurate metrics of risk, ethics, and governance.
2. This contextual understanding relates to the need to develop an informed technological framework, not only as an important metric for AI research itself but also as a metric intrinsic to supporting accurate anticipatory risk and governance research. The challenge of conceptually framing AI is the first knowledge output of the module.
3. The module will use examples of AI products, such as face-recognition algorithms, consumer media platforms (Facebook, Spotify, Netflix, Amazon), cloud-based services (IBM Watson; personal assistants such as Siri, Alexa, and Cortana), and autonomous vehicles, to interrogate how risk and governance metrics, as well as ethical metrics, depend upon informed technological and conceptual frameworks.
4. With the framing challenge of AI understood, the focus moves to the ethical challenges these technologies raise. Particular attention is given to the question of AI autonomous decisions, their risks, and the related challenges to informed governance. Technological ethics is introduced to students as an increasingly relevant aspect of AI research and technologies. This reflects an evolving change in AI research, which is becoming more supportive of engaging with ethical narratives. The focus on conceptual framing and ethical challenges constitutes the second knowledge output of the module.
5. The module concludes by addressing questions of risk and governance from a number of disciplinary and cross-disciplinary perspectives relating to risk communication, ethical tensions, regulation, and legal contexts. This constitutes the culmination of the module and the third knowledge output.

Learning Outcomes:

Cognitive (Knowledge, Understanding, Application, Analysis, Evaluation, Synthesis)

On successful completion of the module the student will possess:
1. an informed understanding of the concept of AI, AI ethics, and the concepts of risk and governance;
2. the ability to question and explore possible risk identification and ethical challenges, and to understand how these present further challenges relating to governance;
3. the understanding to engage adequately with contextual edge cases and to advance a context-specific application of the concepts of risk and governance in relation to AI technologies;
4. the ability, at a basic level, to conceptually frame emerging AI innovations as examples of technologies that are socially embedded and autonomous, and which present numerous identifiable risks and governance challenges.

Affective (Attitudes and Values)

On successful completion of the module, the student will be able to:
1. Develop an awareness of the ethical contexts of AI technologies.
2. Develop an appreciation of AI technologies in the context of acknowledging conceptual meaning and framing, and the importance of these metrics to risk and governance.

Psychomotor (Physical Skills)

N/A

How the Module will be Taught and what will be the Learning Experiences of the Students:

1. Edge cases will be used to highlight possible challenges to developing informed and accurate risk metrics.
2. The edge cases will communicate a practical understanding of each of the core concepts: risk, ethics, and governance. This practical analysis will consolidate a contextual understanding and reinforce the need to consider these values when developing conceptual frameworks in parallel with the innovation cycle.
3. The output from the practical analysis of the edge cases will promote a more informed, nuanced, and pragmatic understanding of the many societal, ethical, and legal challenges that AI innovation presents.
4. The module will support the student in developing further research that is not only conceptually aligned with the key metrics of risk, ethics, and governance, but will also instil a purposeful appreciation of the need to consider the societal impacts of AI, programming, and autonomous technologies.

Research Findings Incorporated in to the Syllabus (If Relevant):

Prime Texts:

Turing, A. M. (1950) Computing Machinery and Intelligence, Mind 59(236): 433-460
McCarthy, J., Minsky, M., Rochester, N., & Shannon, C. E. (1955) A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence, http://www.formal.stanford.edu/jmc/history/dartmouth/dartmouth.html
Searle, J. R. (1980) Minds, Brains, and Programs, Behavioral and Brain Sciences 3: 417-457
Dreyfus, H. L. (1992) What Computers Still Can't Do: A Critique of Artificial Reason, MIT Press
Calo, R. (2017) Artificial Intelligence Policy: A Primer and Roadmap, Available at SSRN: https://ssrn.com/abstract=3015350 or http://dx.doi.org/10.2139/ssrn.3015350
Gasser, U., & Almeida, V. A. F. (2017) A Layered Model for AI Governance, IEEE Internet Computing 21(6): 58-62. doi:10.1109/mic.2017.4180835
Johnson, D. G., & Verdicchio, M. (2017) Reframing AI Discourse, Minds & Machines 27: 575

Other Relevant Texts:

Luger, G. F., & Chakrabarti, C. (2016) From Alan Turing to Modern AI: Practical Solutions and an Implicit Epistemic Stance, Springer London
Gunkel, D. J. (2012) The Machine Question, MIT Press

Programme(s) in which this Module is Offered:

Semester - Year to be First Offered:

Module Leader:

martin.cunneen@ul.ie