Security for Everyone: Disability, Empathy, and Intersectionality in Cybersecurity

Presenter Information

Aaron Brown, Google

Location

South Ballroom, Hemmingson Center

Start Date

April 3, 2025, 3:30 PM

End Date

April 3, 2025, 4:15 PM

Description

Chaired by Aaron Crandall, Ph.D. (Gonzaga University)

In this talk, I will draw on 20 years of experience in the tech industry and a decade of novel security research to highlight some blind spots in the broader security industry. I will combine high-level discussion of at-risk parties who are underserved by the current security community with concrete examples of threats that uniquely impact those individuals. I will discuss the root causes of these overlapping areas of underinvestment and how intersectionality compounds the problem. I will end with a discussion of the opportunities for AI to help remediate this issue, as well as some potential pitfalls that we as a community need to be aware of when applying AI to security for these underserved communities.

I will start the talk with an overview of threat modeling, a key analysis technique used by security professionals. Threat modeling provides a systematic path for turning abstract ideas about possible risks or concerns into detailed threats to be mitigated or accepted. This allows security practitioners to focus on the threats that are the most serious or that will provide the most efficient risk reduction for the resources invested.
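As a rough illustration of that prioritization step, the sketch below scores each threat by expected harm and ranks it by risk reduction per unit of mitigation cost. The Threat structure, its fields, and the example numbers are illustrative assumptions for this abstract, not a methodology the talk prescribes; real threat modeling weighs far more context than a likelihood-times-impact score.

```python
# Minimal, illustrative prioritization step of a threat model.
# All fields, weights, and example numbers are hypothetical.
from dataclasses import dataclass


@dataclass
class Threat:
    name: str
    likelihood: float        # estimated probability the threat materializes (0..1)
    impact: float            # estimated harm if it does (arbitrary units)
    mitigation_cost: float   # resources needed to mitigate it (arbitrary units)

    @property
    def risk(self) -> float:
        # Expected harm if the threat is left unaddressed.
        return self.likelihood * self.impact

    @property
    def reduction_per_cost(self) -> float:
        # Risk removed per unit of resources, assuming mitigation eliminates the threat.
        return self.risk / self.mitigation_cost


threats = [
    Threat("credential phishing email", likelihood=0.6, impact=80, mitigation_cost=10),
    Threat("screen-reader homophone phishing", likelihood=0.3, impact=80, mitigation_cost=5),
    Threat("physical device theft", likelihood=0.1, impact=50, mitigation_cost=20),
]

# Spend resources on the threats that buy the most risk reduction first;
# threats that fall below some threshold may simply be accepted.
for t in sorted(threats, key=lambda t: t.reduction_per_cost, reverse=True):
    print(f"{t.name}: risk={t.risk:.1f}, reduction/cost={t.reduction_per_cost:.2f}")
```

Ranking by risk reduction per unit of cost is one simple way to capture "the most efficient risk reduction for the resources invested"; which threats are accepted rather than mitigated depends on the subject's risk appetite, which is where the next point comes in.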

Threat models, however, are not universal. They depend on subjective risk appetites, differing circumstances of subjects or organizations, and the presence of different kinds of threat actors. We summarize this observation in the industry maxim “your threat model is not my threat model”.

This means that the threats facing underserved communities may not get the recognition or resources required to mitigate them effectively. The problem is exacerbated by the fact that these areas of need are often underfunded along multiple dimensions. Take, for instance, the needs of people who are blind or have low vision. Accessibility is already significantly underfunded in many cases, and the overlap of that underfunding with the limited resources available for security further shrinks the pool available to address the elements of a blind person’s threat model that are unique and not shared with the wider populace.

Now consider other intersectional factors that might impact a person’s threat model: their cultural or geopolitical context, their refugee or immigration status, their ethnic background or perceived racial identity, their sexuality, and so on. All of these elements increase the complexity of their threat model and reduce the pool of resources that society has available to address the intersection of these traits.

To solidify this point, I’ll use one concrete example: homophone attacks. These are attacks in which “sound-alike” text is used to trick users of screen readers (typically blind or low-vision folks) into taking an action that isn’t in their best interest or that they wouldn’t choose themselves, for example social engineering or phishing attacks attempting to steal credentials. I’ll then explore ways this could intersect with other aspects of identity to further target marginalized users.
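To give a feel for the mechanic (this is not a technique the talk describes), the sketch below uses a simplified Soundex phonetic code to flag names that are spelled differently from a trusted name but would likely be read aloud the same way. The function names, trusted names, and examples are hypothetical; a real detector would need to model how the actual text-to-speech engine pronounces the text.

```python
# Illustrative check for "sounds-alike" spoofs of a trusted name, using a
# simplified Soundex phonetic code. Names and examples are hypothetical;
# a real detector would model the TTS engine's pronunciation.
SOUNDEX_CODES = {
    **dict.fromkeys("bfpv", "1"),
    **dict.fromkeys("cgjkqsxz", "2"),
    **dict.fromkeys("dt", "3"),
    "l": "4",
    **dict.fromkeys("mn", "5"),
    "r": "6",
}


def soundex(word: str) -> str:
    """Simplified Soundex: first letter plus up to three consonant-class digits."""
    word = "".join(ch for ch in word.lower() if ch.isalpha())
    if not word:
        return ""
    codes = [SOUNDEX_CODES.get(ch, "") for ch in word]
    # Collapse runs of the same code, then drop the code of the first letter.
    collapsed = [codes[0]] + [c for prev, c in zip(codes, codes[1:]) if c != prev]
    digits = "".join(c for c in collapsed[1:] if c)
    return (word[0].upper() + digits + "000")[:4]


def looks_like_homophone_spoof(candidate: str, trusted: str) -> bool:
    # Spelled differently but phonetically encoded the same: a candidate that a
    # screen reader would likely read just like the trusted name.
    return candidate.lower() != trusted.lower() and soundex(candidate) == soundex(trusted)


print(looks_like_homophone_spoof("paypall", "paypal"))  # True: reads the same aloud
print(looks_like_homophone_spoof("example", "paypal"))  # False
```

Soundex is only a crude proxy: it keys on the first letter, so it misses spoofs such as “fishing” for “phishing” that a speech engine would pronounce identically, which is part of why this threat model needs dedicated attention rather than generic tooling.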

After that, I’ll briefly explore a few other notable threat models that intersect with these same dynamics, including victims of intimate partner violence, children of abusive parents, sex workers, and drug addicts.

Finally, I will pivot to the promise and peril of AI for all of these issues. The promise of AI lies in connecting people with resources and helping them gather information to assess their own threat models. It can also tear down barriers that leave users unable to protect themselves, such as by providing easy machine translation for security tooling or giving tailored advice on the best mitigations for specific threats.

The peril of AI is that it might entrench existing inequities in how resources are allocated to threats. If security-conscious AIs, or tools containing such models, are trained only on the dominant group’s threat model, they might deepen the divide that minority groups have to bridge in order to get their security needs met.
