Location

Wolff Auditorium, Jepson Center

Start Date

4-4-2025 10:45 AM

End Date

4-4-2025 11:30 AM

Description

Chaired by Graham Morehead, Ph.D. (Gonzaga University)

Current ways of developing and deploying AI have presented young people and their educators with ‘tragic dilemmas’: they have to engage with AI without being able to choose an all-things-considered right thing to do. Tragic dilemmas involve pressures on the alignment of beliefs, commitments, and motivations of teachers and students, and ultimately impact their integrity and wellbeing. I argue that focusing on the issue of ‘tragic dilemmas’ can help those in higher education to rethink and refine the usual recommendations for digital ethics and approaches to AI, such as setting institutional policies, changing assessment strategies, and developing character education programmes. I suggest that these typical approaches will be most effective if structures are put in place to support the integrity of teachers and students, with sensitivity to their specific roles and responsibilities in relation to uses of AI. Drawing on experiences from two digital ethics courses run in higher education institutions in Asia, I will open up two cases for further discussion: using AI tools for English language editing whilst being aware of language bias, and keeping the environmental cost of AI in the picture whilst equipping students to use it well.


AI and ‘Tragic Dilemmas’ in Education
