
Ethics and Artificial Intelligence


Course Description

AI-based machines make life-and-death decisions in the world and can introduce additional bias and injustice into society. Examination of ethical issues and obligations related to AI. Topics include the politics of technology, data collection and mining, algorithm creation and transparency, and the use and future of AI.

Additional Requirements for Graduate Students:
Graduate students will be expected to demonstrate a deeper and richer understanding of the course material, either by answering additional or more difficult questions on quizzes and exams or by writing additional or longer papers.


Athena Title

Ethics and AI


Prerequisite

PHIL 2030 or PHIL 2030E or PHIL 2030H or CSCI 3030 or CSCI 3030E or CSCI 3030H or CSCI(PHIL) 4550/6550 or a 3000-level PHIL course


Semester Course Offered

Offered spring


Grading System

A - F (Traditional)


Course Objectives

Students who are successful in this course will:
1. Explain ethical positions and problems related to artificial intelligence.
2. Explain aspects of artificial intelligence in relation to its effects on individuals and society.
3. Take and defend ethical positions on AI topics.


Topical Outline

Objectivity and technology: Some people think technology is value-neutral; why might they be wrong? What are some examples from the history of technology that show us why we should be a bit skeptical of claims of objectivity or value neutrality?

Data collection and mining: What ethical obligations are there for respecting the privacy of individuals when aggregating data? Should social networking sites be more transparent about the data they collect?

Algorithm creation and transparency: How does bias get built into algorithms? What steps can be taken to prevent it? What obligations does a programmer have to create an impartial algorithm?

The use of AI: In what ways can AI influence policy? How has machine learning been used to create (or prevent) social injustices? What obligations does a researcher at a startup have to consider justice issues when creating programs?

The future of AI: Could we create robots that we would have ethical obligations toward? Is there anything wrong with creating AI-based tech that could be implanted into a human?
