Presenter Information

Ted McCullough, Adobe

Location

Wolff Auditorium, Jepson Center

Start Date

4-4-2025 10:00 AM

End Date

4-4-2025 10:45 AM

Description

Chaired by Logan Axon, Ph.D. (Gonzaga University)

Historically, businesses have had a Code of Conduct and, in some cases, a Code of Ethics to guide employees’ internal and external behavior. When employees face an ethical dilemma or an ambiguous situation not covered in the Code of Conduct, the principles articulated in the Code of Ethics help guide the decision maker. A classic example of ambiguity is the use and applicability of non-disclosure agreements. A Code of Conduct rule might state “employees must not disclose confidential company information to outside parties,” with the implication that this holds true even if the information relates to potentially unethical practices. By contrast, a Code of Ethics value might state that the company is “committed to transparency and honesty in all business dealings.” To resolve the conflict between the rule and the value, an employee must reason to a conclusion.

The difference between Codes of Conduct and Codes of Ethics has historically not been that important when it comes to the actual creation of products, as products have tended to manifest a quality of Value Adherence or Value Adjacency. Value Adherence is the idea that the product as sold reflects a Code of Conduct or Ethics (e.g., “The customer is always right”) but does not itself contain the rule or value of a code. For example, the customer may always be right, but this does not affect the quality of the hamburger they have purchased. Value Adjacency is the idea that the product as sold and the rules or values do not overlap at all. Enron had a Code of Ethics, but this did not stop it from engaging in fraud.

In contrast to the concepts of Value Adherence and Value Adjacency is the concept of Value Inherence. With Value Inherence, the company’s (and its employees’) values are directly incorporated into the product as part of the product itself. This is what makes ML-based products different from the historical relationship between products and rules or values. The phenomenon of Value Inherence raises the following questions: i) Why is there Value Inherence? and ii) How do we ensure that a company’s values are reflected in its ML-based products?

To understand Value Inherence, one needs first to understand bias as a technical feature of ML. Bias is the “difference between this [an] estimator’s [ML model’s] expected value and the true value of the parameters being estimated.” Further, “Any discussion of bias depends on the unknown true function.” Much of ML is dedicated to minimizing technical bias and finding the true function that fits the data (facts) or a desired outcome. Example strategies for addressing technical bias include: a) data preparation (e.g., normalization techniques, data labelling); and b) model training (e.g., data penalty algorithms). However, these same strategies can also introduce bias because of the quality of Value Inherence, where the ML model’s expected value does not reflect the data (facts) or the desired societal outcome. A potential cause of this bias is ambiguity around the facts or the desired outcome used to resolve technical bias. In short, bias is part of the ML product itself, as opposed to being adherent or adjacent, as was historically the case with rules or values expressed in a Code of Conduct or Ethics.
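The technical sense of bias above can be made concrete with the textbook example of a biased estimator. The following sketch (my illustration, not from the talk) compares the maximum-likelihood variance estimator, whose expected value systematically undershoots the true parameter, against the Bessel-corrected version:

```python
import random

# Statistical bias is the gap between an estimator's expected value and
# the true parameter value. The classic case: the variance estimator
# that divides by n has expected value ((n-1)/n) * sigma^2, not sigma^2.

random.seed(0)

def biased_variance(sample):
    """Maximum-likelihood variance estimate: divides by n (biased)."""
    n = len(sample)
    mean = sum(sample) / n
    return sum((x - mean) ** 2 for x in sample) / n

def unbiased_variance(sample):
    """Bessel-corrected estimate: divides by n - 1 (unbiased)."""
    n = len(sample)
    mean = sum(sample) / n
    return sum((x - mean) ** 2 for x in sample) / (n - 1)

# True distribution: uniform on [0, 1], whose variance is 1/12.
true_var = 1 / 12
n, trials = 5, 20000
avg_biased = sum(biased_variance([random.random() for _ in range(n)])
                 for _ in range(trials)) / trials
avg_unbiased = sum(unbiased_variance([random.random() for _ in range(n)])
                   for _ in range(trials)) / trials

# Averaged over many samples, the biased estimator lands near
# (n-1)/n * true_var, i.e. systematically below the true value.
print(round(avg_biased, 4), round(avg_unbiased, 4), round(true_var, 4))
```

The point of the comparison is that bias here is a property of the estimator itself, not of any single sample — which parallels the claim that bias is part of the ML product rather than adjacent to it.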

One possible solution to bias and Value Inherence is a rules-based approach in the form of updating a Code of Conduct with rules regarding data preparation, model training, etc. Such an approach, however, shows the limits of rules. Specifically, rules break down and don’t generalize well given the complexities of the world. For example, the rule “All birds fly, except penguins, ostriches, birds with broken wings, and so on and so on” is replete with exceptions and counterexamples. Rules-based ethical systems suffer from the same problems (see, e.g., “Never tell a lie” v. the “murderer at the door” example; “Act to facilitate the greatest good for the greatest number” v. the questions of what the greatest good is, and for whom, such that the “goods” of one group are set against another’s).
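The bird rule can be sketched directly in code, as a toy illustration (mine, not the talk’s) of how exception-patched rules fail on cases their author never saw:

```python
# A rules-based "all birds fly" predicate, patched with explicit
# exceptions as counterexamples are discovered.
FLIGHTLESS = {"penguin", "ostrich", "emu"}  # ...and so on, and so on

def can_fly(bird, broken_wing=False):
    """Rule: all birds fly, except listed species or injured birds."""
    return bird not in FLIGHTLESS and not broken_wing

print(can_fly("sparrow"))  # True, as the rule intends
print(can_fly("penguin"))  # False, covered by an exception
print(can_fly("kiwi"))     # True -- wrong: the rule never anticipated
                           # this flightless bird
```

Each newly discovered exception requires amending the rule, yet the exception set is never complete — the same dynamic the abstract attributes to rules-based Codes of Conduct.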

The underlying problem is that rules-based approaches don’t generalize well given the complexities of the world. Rules don’t generalize well because a rule is typically created from a previously seen example and is meant to be applied to the same or similar factual scenarios. The complexity of the world ensures that there will almost always be new, unanticipated factual scenarios. Bias may arise where facts or outcomes are not defined or well understood; if the facts or outcomes are not known, then it may not be clear which rule (if any) applies.

Another possible solution for Value Inherence and bias is Abductive Reasoning, in which one reasons to the most likely conclusion from a known set of facts. Abductive Reasoning does well in complex problem spaces, as it can adjust its approach to problem solving based upon observed, changing facts and provide a best guess as to an outcome (i.e., the one with the highest probability). Further, abduction allows the flexibility to approximate a true function even where the facts and outcomes are unclear in ways that may result in bias. In abduction, one acts to facilitate the best guess as to the facts, or to facilitate a desired outcome, as a prophylactic against bias (a “Best Judgment Approach”).
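One way to mechanize “reasoning to the most likely conclusion” is to score candidate explanations against the observed facts and select the highest-probability one. The sketch below is my illustration; the hypotheses, priors, and likelihoods are invented for the example:

```python
def abduce(hypotheses, observations):
    """Return the hypothesis name with the highest (unnormalized)
    posterior: prior * product of likelihoods for each observed fact."""
    def score(h):
        s = h["prior"]
        for fact in observations:
            # Small default likelihood for facts the hypothesis
            # does not model at all.
            s *= h["likelihood"].get(fact, 0.01)
        return s
    return max(hypotheses, key=score)["name"]

hypotheses = [
    {"name": "it rained",
     "prior": 0.3,
     "likelihood": {"wet grass": 0.9, "wet street": 0.9}},
    {"name": "sprinkler ran",
     "prior": 0.4,
     "likelihood": {"wet grass": 0.8, "wet street": 0.1}},
]

# As the observed facts change, the best explanation changes with them.
print(abduce(hypotheses, ["wet grass"]))                # sprinkler ran
print(abduce(hypotheses, ["wet grass", "wet street"]))  # it rained
```

Unlike the fixed bird rule, nothing here needs amending when a new fact arrives: the same scoring procedure re-ranks the explanations, which is the flexibility the abstract claims for abduction.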

In a Best Judgment Approach, employees are exposed to scenarios where the facts and desired outcomes are unclear. Where facts or outcomes are unclear, employees must develop multiple possible explanations and use their best judgment to identify the most likely facts supporting a desired outcome. For example: how should one label an image used to train a model when the image may or may not depict consensual sexual relations? A solution is to train the labeler on a number of different scenarios that require applying the Code of Ethics to render a labelling decision (i.e., using best judgment to render a decision).


Value Inherence, Abductive Reasoning, and Building Machine Learning Models that Reflect Ethical Decision Making
