Panel: AI in Education
Location
Wolff Auditorium, Jepson Center
Start Date
4-4-2025 12:30 PM
End Date
4-4-2025 1:15 PM
Description
Panel chaired by Erik Schmidt, Ph.D. (Gonzaga University)
This panel will feature the following speakers:
Chase Bollig, Gonzaga University
The release of ChatGPT and other widely available AI tools for writing and text generation represents a significant disruption to the teaching and learning of writing across the curriculum. As part of this conversation about AI and education, I share experiences of teaching with and about AI in first-year writing over the last five semesters, including how students’ boundary-setting behaviors reveal implicit ideologies about writing and how students describe their own frameworks for ethical engagement with AI in writing classes. This technology shows promise for students who develop rhetorical sensibilities for navigating AI. At the same time, as a literacy technology, AI challenges faculty to articulate the value of friction in reading and writing during a period of shifting literacy practices and emerging standards for AI literacy.
Anny Case, Gonzaga University
Highly regarded literacy researchers Kalantzis and Cope call Generative AI “a literacy machine” (2024, p. 6) poised to reshape how we produce, consume, and interact with text. While its full impact remains unclear, GenAI will undoubtedly have consequences for K-12 education. Will GenAI make the work of teaching harder or easier? Will students become more or less empowered as readers, writers, thinkers, and communicators? We could harness AI’s potential to liberate literacy learning from contrived and formulaic tasks and refocus on safeguarding the distinctly human qualities and purposes of reading, writing, and communicating in multimodal ways and spaces. Or we could use it to supercharge the status quo. Along these lines, there is an impulse in some educational circles to respond to AI not with curiosity about its potential to deepen and expand learning, but with intensified surveillance: fixating on plagiarism, deploying AI-detection software, and policing students’ use of technology in order to protect long-standing hierarchies and practices in education. Yet other, more imaginative alternatives warrant our best thinking and creativity. What if AI functions not simply as an educational shortcut, but as a spark to ignite our collective imagination about the structures and aims of schools?
John Correia, Gonzaga University
For educators, recognizing and planning for the impact of AI on education has never been more important. AI’s use in education has both bright and dark sides, creating great opportunities as well as real risks. I believe that working through those risks and negative impacts will give us an opportunity to further develop, or better focus on, our collective purpose and to improve how we prepare students for a future that includes AI. In the end, if we engage with the challenge and stay open to new opportunities, we will be better at educating students for that future.
Charlie Lassiter, Gonzaga University
One challenge in thinking carefully about difficult topics is getting the relevant facts. For example, when thinking about transgender individuals in sports, biological facts are relevant, but so are a wide swath of cultural, historical, and sociological facts. Dismissing any one of these leaves a lopsided understanding of important issues. AI tools, particularly Elicit, Storm, and NotebookLM, provide easy ways of gathering and organizing information quickly. An upshot for students is that they are better equipped to dive into the normative issues. There are challenges, of course. LLMs hallucinate and produce bullshit. Students don’t know what they don’t know and can accept the bullshit and hallucinations at face value. Even given these challenges, I argue that, when used in the right sorts of supervised contexts, the benefits are worth the risks.
Rachel S. Robertson, Hong Kong Baptist University
Current ways of developing and deploying AI have presented young people and their educators with ‘tragic dilemmas’: they have to engage with AI without being able to choose an all-things-considered right thing to do. Tragic dilemmas involve pressures on the alignment of beliefs, commitments, and motivations of teachers and students, and ultimately impact on their integrity and wellbeing. I argue that focusing on the issue of ‘tragic dilemmas’ can help those in higher education to rethink and refine the usual recommendations for digital ethics and approaches to AI, such as setting institutional policies, changing assessment strategies, and developing character education programmes. I suggest that these typical approaches will be most effective if structures are put in place to support the integrity of teachers and students, with sensitivity to their specific roles and responsibilities in relation to uses of AI. Drawing on experiences from two digital ethics courses run in higher educational institutions in Asia, I will open up two cases for further discussion: using AI tools for English language editing whilst being aware of language bias, and keeping the environmental cost of AI in the picture whilst equipping students to use it well.
Recommended Citation
Bollig, Chase; Case, Anny; Correia, John; Lassiter, Charles; and Robertson, Rachel S., "Panel: AI in Education" (2025). Value and Responsibility in AI Technologies. 9.
https://repository.gonzaga.edu/ai_ethics/2025/general/9