This Fall, I’m teaching a first-year seminar on Responsible AI (Data Science 195: Topics in Data Science).
This course critiques how the AI industry integrates societal values into its products, questioning whose values are being represented.
Students will connect responsible AI with the data used to train AI models. Key themes include transparency, fairness, accountability, and privacy.
Emphasizing responsibility by design, students will research how specific companies and nations are aligning AI to foster an inclusive and safe AI-driven world.
In this course, we'll be guided by a series of essential questions: thought-provoking, open-ended inquiries that encourage deep thinking and reflection.
These questions are designed to help students explore complex issues and connect their learning to real-world applications.
They will shape our discussions, guide our assignments, and help us critically examine the role of AI in society.
The essential questions for this course:
How will artificial intelligence shape our future?
What ethical considerations must guide its development?
How does the AI industry integrate societal values into its products, and whose values are being represented?
How does the data used to train AI models relate to key themes of transparency, fairness, accountability, and privacy?
How are companies and nations aligning AI, through responsibility by design, to foster an inclusive and safe AI-driven world?
As students engage with these questions, they will consider how the questions relate to the readings, case studies, and practical examples we will explore. Their thoughts and reflections will be incorporated into class discussions and assignments. By continuously revisiting these questions, students will develop a nuanced understanding of responsible AI and be better prepared to contribute to ethical AI initiatives in their future careers.