csc 510-001, (1877)
fall 2024, software engineering
Tim Menzies, timm@ieee.org, com sci, nc state



Ethics

It is ethical to improve the revenue of your company, since that money becomes wages, which becomes groceries, which becomes dinner, so everyone and their kids can sleep better at night.

It is also ethical to change the design of software in order to ensure that (say) the software is not unduly discriminatory towards a particular social group (e.g. some groups characterized by age, race, or gender).

Ethics

SE is about choice.

What if those choices have ethical implications?

Maybe, maybe not.

Software has an increasing impact on modern society.

Unethical Software?

Is it doing so responsibly and fairly? Maybe not.

Model stores

Model Store models ship with no tools for (a) detecting or (b) mitigating discriminatory bias.

- This is troubling: these models are susceptible to issues of unfairness.
- That is, using Model Stores, developers can unwittingly unleash models that are highly discriminatory.
- Worse yet, given the ubiquity of the internet, those discriminatory models could be unleashed on large sections of the general public.

Examples:

Kinds of Ethics

The Institute of Electrical and Electronics Engineers (IEEE) has recently discussed general principles for implementing autonomous and intelligent systems (A/IS). They propose that the design of such A/IS systems satisfy certain criteria:

Other organizations, like Microsoft, offer their own principles for AI:

Nevertheless, the following table shows one way we might map together these two sets of ethical concerns. Note that:

The reader might dispute this mapping, perhaps saying that we have missed, or misrepresented, some vital ethical concern. That would be a good thing, since it would mean you are now engaging in discussions about software and ethics. In fact, the best thing that could happen below is that you say "that is wrong; a better way to do that would be...". As George Box said: all models are wrong, but some are useful.

In any case, what the above table does demonstrate is that:

Explore the choices

Ethics: Take the lead

From the IEEE:

How to fix Bias? (with algorithms)

How not to fix

Why not just remove the protected attribute (age, gender, etc.)?

- Empirically, this does not work. We tried it: there was almost no change in the bias metrics afterwards.
- Why? The attributes are connected, so removing one thing still leaves the bias in all the others (see the sketch below).
- Example: the 2016 Amazon Prime same-day delivery rollout, which was highly discriminatory against Black neighborhoods.
  - It excluded minority neighborhoods in Boston, Atlanta, Chicago, Dallas, New York City, and Washington, D.C., while extending the service to white neighborhoods.
  - The model was trained on "zip code", which can be a surrogate for "race" (given racial separation in many major US cities).
  - The directly observed correlation from zip code to race was weak, but the two were connected via the labels "good delivery" and "slower delivery".
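To see why dropping the protected attribute is not enough, here is a minimal sketch on synthetic data (my own illustration, not the Amazon data): a proxy feature that tracks the protected attribute keeps the discrimination in the model even after the protected attribute is removed.

```python
# Sketch on synthetic data: dropping the protected attribute ("race") barely
# changes the bias when a proxy ("zip_code") that tracks it is still present.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000
race = rng.integers(0, 2, n)                     # 1 = privileged group
zip_code = (race + (rng.random(n) < 0.15)) % 2   # zip code mostly tracks race
income = rng.normal(size=n) + 0.5 * race
# historical label ("same-day delivery offered"), biased through zip code
y = ((income + 1.5 * zip_code + rng.normal(scale=0.5, size=n)) > 1.0).astype(int)

def parity_gap(model, X):
    """P(predict=1 | privileged) - P(predict=1 | unprivileged)."""
    pred = model.predict(X)
    return pred[race == 1].mean() - pred[race == 0].mean()

X_with    = np.column_stack([race, zip_code, income])
X_without = np.column_stack([zip_code, income])  # protected attribute removed

m_with    = LogisticRegression().fit(X_with, y)
m_without = LogisticRegression().fit(X_without, y)
print("parity gap, race kept   :", round(parity_gap(m_with, X_with), 2))
print("parity gap, race dropped:", round(parity_gap(m_without, X_without), 2))
```

Typically both gaps come out about the same, since the proxy carries the bias on its own.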

Recognize that bias is inevitable

The "best" model is assessed using criteria C1, C2, C3, C4, ...

- If we build models optimizing for C1 and C2 (and ignore the rest),
- then it is a random variable whether or not those models satisfy C3, C4, ...

The thing is, all of the above is a huge assembly of choices made by software engineers (see the sketch below).
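As a small illustration of that point, the sketch below (synthetic data, my own hypothetical setup) picks candidate models purely on one criterion (accuracy); the criteria we did not optimize for (recall, false alarms) are left to fall wherever they happen to fall.

```python
# Sketch (synthetic data): candidates that roughly tie on one criterion
# (accuracy) can differ on criteria we never optimized for.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 3))
y = (X @ np.array([1.0, 0.5, -0.5]) + rng.normal(size=n) > 0).astype(int)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=1)

for w in (0.3, 1.0, 3.0):                        # equally plausible design choices
    m = LogisticRegression(class_weight={0: 1.0, 1: w}).fit(Xtr, ytr)
    pred = m.predict(Xte)
    acc    = (pred == yte).mean()                # C1: accuracy
    recall = pred[yte == 1].mean()               # C3: recall
    fpr    = pred[yte == 0].mean()               # C4: false-alarm rate
    print(f"weight={w}: acc={acc:.2f} recall={recall:.2f} false_alarm={fpr:.2f}")
```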

So measure and check for bias

 truth: no | truth: yes | learner
-----------+------------+---------
    TN     |     FN     | silent
    FP     |     TP     | loud

Divide the data into groups (e.g. divide on gender, age, nationality, anything really), then compare the confusion-matrix rates across those groups (see the sketch below).
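Here is a minimal sketch of that check; the small arrays at the bottom are placeholders standing in for real truth values, predictions, and group labels.

```python
# Minimal sketch: per-group confusion-matrix rates are the basic bias check.
import numpy as np

def rates(y_true, y_pred):
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    tpr = tp / (tp + fn) if (tp + fn) else 0.0   # how often "loud" when truth = yes
    fpr = fp / (fp + tn) if (fp + tn) else 0.0   # how often "loud" when truth = no
    return tpr, fpr

def bias_report(y_true, y_pred, group):
    # group: one label per row (e.g. gender, age band, nationality)
    for g in np.unique(group):
        tpr, fpr = rates(y_true[group == g], y_pred[group == g])
        print(f"group={g}: TPR={tpr:.2f} FPR={fpr:.2f}")
    # fairness check: TPR (and FPR) should be close across the groups

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 0])
group  = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
bias_report(y_true, y_pred, group)
```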

Demo: AIF360 (IBM's AI Fairness 360 toolkit)
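Below is a minimal sketch of that kind of demo, assuming the aif360 Python package is installed; the tiny DataFrame and the "sex"/"label" columns are placeholders for a real dataset, and the exact API may differ across versions.

```python
# Sketch: measure a group-fairness metric with AIF360, apply one mitigation
# (reweighing the training data), then re-measure.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

df = pd.DataFrame({
    "sex":   [0, 0, 0, 1, 1, 1, 1, 0],      # placeholder protected attribute
    "age":   [25, 40, 35, 30, 50, 45, 28, 33],
    "label": [0, 0, 1, 1, 1, 0, 1, 0],       # placeholder favorable outcome
})
data = BinaryLabelDataset(df=df, label_names=["label"],
                          protected_attribute_names=["sex"])

priv, unpriv = [{"sex": 1}], [{"sex": 0}]
metric = BinaryLabelDatasetMetric(data, unprivileged_groups=unpriv,
                                  privileged_groups=priv)
print("parity difference, before:", metric.statistical_parity_difference())

repaired = Reweighing(unprivileged_groups=unpriv,
                      privileged_groups=priv).fit_transform(data)
metric2 = BinaryLabelDatasetMetric(repaired, unprivileged_groups=unpriv,
                                   privileged_groups=priv)
print("parity difference, after :", metric2.statistical_parity_difference())
```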

More generally

Software contains choices.

SE people make choices.

SE people can make bad choices or better choices.

It is not clear that legal and political institutions are keeping up with the technology choice space in this area. So it is up to us.

Case Studies

https://www.scu.edu/ethics/focus-areas/technology-ethics/resources/technology-ethics-cases/

https://onlineethics.org/resources?combine=software&field_keywords_target_id=&field_resource_type_target_id=13236

References