Location
South Ballroom, Hemmingson Center
Start Date
3-4-2025 1:30 PM
End Date
3-4-2025 2:15 PM
Description
Chaired by Anthony Fisher, Ph.D. (Gonzaga University)
Since the advent of the atomic bomb 80 years ago, technology has empowered humankind with world-ending capabilities. These capabilities have continued to multiply over the decades, culminating in the development of artificial intelligence. AI is the ultimate technology, “our final invention,” in the words of some, invoking thoughts not only of the power of AI to help us invent more things, but also the idea that invention might end along with us. All of which might provoke the question: should we really be doing this?
Whenever we see the word “should” we are seeing ethics enter the conversation. Ethics is becoming increasingly important due to the world experiencing inflection points on several long-term trends. One of the longest-term of these trends involves humanity’s growing power due to technology. Technology gives us greater efficiency of action, greater scope of action, and greater determinacy of outcome. In the past, humans were involuntarily constrained by their lack of power; now, greatly empowered, we must learn to be voluntarily constrained by our own good judgment. This is as true for individuals as for civilization itself.
Described in another way, what were once constants in society are becoming variables; or, metaphorically, civilization is changing from a solid to a liquid. Fixed relationships are changing, becoming fluid; what was once taken for granted is no more. In some cases, this might be a liberation from past oppressive norms, but in other cases it might be like randomly mutating an organism – some radiation might be tolerable, but a bit more leads to cancer, and a bit more leads to swift death. If we don’t want civilization to completely liquefy and go down the drain, we need to figure out new ways to maintain as constant those things which must remain constant, while also melting and moving those things which need to be corrected, and then “fixed” into a new place.
This is why technology ethics has become one of the most important topics in the world right now: what was previously decided for us by our weakness must now be decided by us through our voluntary choice. And if we can choose to have anything – which is the desired endpoint for AI, after all – then it becomes very important not only to choose the right things, but even more so to want the right things. We need to want good outcomes and not bad ones. We need to want not just to avoid bad technology, and not just to achieve neutral technology; we need to want truly good technology – and someone has to make that happen.
Which brings us to responsibility. Everyone has a responsibility to live an ethical life as an individual to the best of their abilities, and those with more power are more responsible for what they do. As technology empowers us more and more, our responsibility as individuals rises along with it – and the same holds for groups of people: organizations, nations, and ultimately all of humankind.
Given the situation at hand, there are only two kinds of solution. One is to weaken ourselves – to decrease our power and thus, as in the past, have more of life decided for us involuntarily (whether by our own choice or forced upon us by natural or human-made disaster). The other is to dramatically increase our emphasis on ethics, to the extent that we actually determine how to fix the array of moral problems before us and stop society from going down the drain.
This requires a multi-level, multi-vertical, socio-ethical solution. There are the four levels described above: international, national, organizational, and individual. There are also multiple cross-cutting vertical focus areas: political, religious, educational, cultural, economic, and so on. The objectives of the solution are social and ethical: to give people awareness of problems, the facts needed to understand the problems and imagine possible solutions, the ability to evaluate some solutions as better than others, and the ability to implement these solutions in a reasonable and effective fashion – and to do all of this faster than any solutions of similar magnitude have ever been implemented before.
As we look at the landscape of organizations and actors attempting to help the world with ethical solutions today, we can see how they start to fit into these categories. Some categories are well filled with solutions; others are less filled, or even empty. Given these circumstances, this presentation will begin to describe the landscape of approaches to AI and responsibility and then examine whether there might be some tractable and impactful areas where effort could be added in order to accelerate the needed ethical work to keep up with the rapid growth of AI.
Recommended Citation
Green, Brian, "Technology and Ethical Responsibility" (2025). Value and Responsibility in AI Technologies. 1.
https://repository.gonzaga.edu/ai_ethics/2025/general/1