Location
Wolff Auditorium, Jepson Center
Start Date
4-4-2025 3:15 PM
End Date
4-4-2025 4:15 PM
Description
Chaired by Kirk Besmer, Ph.D. (Gonzaga University)
In global discussions, AI has quickly become synonymous with power. AI dominance, whether in the US's trillion-dollar tech firms, in China's 14th Five-Year Plan, or in the EU's AI Act, is seen by global power players as the key to political dominance. China's unveiling of DeepSeek sent panic through the American AI ecosystem precisely because it disrupted carefully maintained strategies to prevent Chinese AI prominence through hardware and data embargoes.
Amid the power-jockeying of centi-billionaires and heads of state, AI's presence has mainly been imposed on the masses. Aside from cases detailed by scholars like Virginia Eubanks, Ruha Benjamin, and Cathy O'Neil, who emphasize how algorithms are routinely used to penalize marginalized groups merely for being marginalized, less noxious cases still show AI being thrust upon us regardless of its benefit. Education is one of the more apparent arenas for this: instructors must combat abuses of LLMs that facilitate disengagement, even as they are told to prepare students for a future in which they will be expected to use AI.
What would it mean, then, to subject AI to democratic interests, to make the AI future not one dictated by technocrats but one attentive to genuine human interests? In the terminology of philosopher Andrew Feenberg, this would constitute the creation and use of a "democratic" AI in place of the current hegemonic model. A starting place would be the participation of the general population, especially the less technologically inclined, in the development and aims of AI. Crowdsourcing, for example, offers one avenue by which laypersons can contribute: writing code, tagging data, creating new data sets, or simply providing feedback. Other intentionally "democratic" strategies, like Decidim or IE University's AI4Democracy, incorporate AI into politics to better promote democratic participation.
But is this a sufficient account of democratic interests? Safiya Noble notes that algorithms trained on mass data sets do not guarantee a clear sense of “democratic” as much as they reinforce biases rampant in a given society. Thus, scholars and activists have highlighted the dangers of AI employed in government functions, including predictive policing, recidivism prediction, facial recognition, automated welfare systems, and more. Without critical voices, AI becomes a substitute for democratic action, offloading the task of political negotiation and overriding people’s rights.
Thus, a critical component in any consideration of democratic AI is the intent behind the project. Feenberg, working within the Critical Theory of Technology, contends that technologies are created with underlying "technical codes": that is, they have intended aims that support and promote certain values. Technologies do not exist in a vacuum but within societies, where they are associated with and directed toward certain ends.
The rapid growth of AI in the past decade, after a long winter, is clearly tied to large tech firms' promotion of AI research for increased profit. The hope invested in AI belongs to a larger trend of digital capitalism, a movement away from earlier forms of capitalism toward one in which digital technologies are pivotal for capital growth. Thus, consulting firms like McKinsey and PwC predict trillions of dollars in the global economy tied to AI, the WEF predicts massive job restructuring because of AI, and China plans its future economic growth around AI dominance. Most AI projects, then, are contextualized by a digital capitalist technical code, even those in the semi-controlled economy of China.
Feenberg's proposal for democratic technology entails enshrining democratic technical codes. This would mean, for example, smaller coalitions developing technologies and the promotion of individual and social needs over profit. A democratic AI, then, would be one no longer directed toward consolidating power, maximizing profit (or minimizing costs), or promoting a singular dominant ideology.
Feenberg’s approach lines up surprisingly well with liberation theology in the Catholic tradition. A parallel between Feenberg’s hegemonic and democratic technical codes can be found in what Jon Sobrino calls the civilizations of wealth and poverty. The civilization of wealth is directed to profit and individual liberty; it allows free market exploitation of labor and nature for the sake of capitalist notions of “progress.” The civilization of poverty, on the other hand, promotes human rights and collective well-being over technological or economic dominance.
Thus, a possibly more radical, though perhaps more Christian, approach to democratic AI would be to adopt the "option for the poor (and marginalized)." Rather than negotiating among disparate participant interests, especially insofar as some within the hegemony will inevitably act only in bad faith, guaranteeing the interests of the worst off would be a way to ensure that all have (some) interests met.
Inevitably, this option raises significant challenges. First, there is the question of why anyone would prefer the option for the poor over the logic of domination. From a philosophical perspective, one can offer a number of arguments, such as Rawls's maximin principle, Habermas's discourse ethics, or various critical theory approaches. But, in true Nietzschean fashion, the powerful are unlikely to cede their vested position. Among Christians, as in most religions, there is strong doctrinal justification for prioritizing the weak, but ultimately, as with all liberation movements of the past century, realizing this perspective will be a hard fight.
Second, there is the challenge of implementing this in the design of AI. Tech firms demand returns on investment to justify their projects, and designing AI to redistribute resources and power to benefit the worst off is unlikely to win the approval of boards of directors. Government grants may then be a preferable alternative, but these are often directed toward the particular aims of those who hold governmental purse strings.
Thus, the third and final challenge is whether AI itself can be adapted to a non-capitalistic framework. The mathematical models all AI is built on require transforming all values into quantities that can be calculated, leading easily to what Weber called instrumental rationality, wherein everything comes to be estimated according to its use value.
Ultimately, the question of democratic AI remains open. Perhaps AI will never be truly democratic, but the above parameters can serve as indicators of the democratic compatibility of AI technologies.
Recommended Citation
Checketts, Levi, "Keynote: Whose Ghost in the Machine? AI, Critical Theory and Democracy" (2025). Value and Responsibility in AI Technologies. 12.
https://repository.gonzaga.edu/ai_ethics/2025/general/12
Keynote: Whose Ghost in the Machine? AI, Critical Theory and Democracy