Mar 31, 2026
HW 04 due TODAY at 11:59pm
Project: Preliminary analysis due April 7
Statistics experience due April 15
Cross validation for logistic regression
Which model do you select?
“…coverage of polls [data science] often does not adequately convey the many decisions that pollsters [data scientists] must make … as well as the potential consequences of those decisions”
Desire for “ideal objectivity”
Work that is free from interference of human perception (Feinberg 2023)
Conclusions that are observer independent (Gelman and Hennig 2017)
Feels like the ethical approach (Feinberg 2023)
There are implications (Feinberg 2023)
Distorted reality of data analysis process
Lack of communication about decisions that don’t seem “objective”
Illuminates the decision-making in data science
Makes explicit the existence of choice throughout an analysis
Makes explicit the moral implications of our choices
Align data science practices with what we ought to do and moral duties to stakeholders




\[ \log(\frac{\pi}{1-\pi}) = \beta_0 + \beta_1 ~ \text{age} + \beta_2 ~ \text{number clicks} + \beta_3 ~ \text{avg time between clicks} \]
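The model above can be fit and compared against alternatives with k-fold cross validation. A minimal sketch in Python with scikit-learn, using simulated data; the predictor names (age, number of clicks, average time between clicks) follow the equation, but the data-generating values here are purely illustrative:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500

# Simulated predictors matching the model: age, number of clicks,
# and average time between clicks (all values are made up).
X = np.column_stack([
    rng.uniform(18, 80, n),   # age
    rng.poisson(20, n),       # number of clicks
    rng.exponential(5, n),    # avg time between clicks
])
y = rng.integers(0, 2, n)     # simulated binary outcome

# 5-fold cross validation of the logistic regression model;
# comparing this score across candidate models is one way to
# answer "which model do you select?"
model = LogisticRegression(max_iter=1000)
scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print(scores.mean())
```

Note that even this small sketch embeds choices a data scientist must make: the number of folds, the scoring metric, and which predictors to include.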


Source: https://scolando.github.io/data-science-ethics/Intro-DS-ethics.html
What are some decisions you have made (or will make) in your project or other data analysis work?
Data science ethics studies and evaluates moral problems related to:
Data: generation, recording, process, dissemination, and sharing
Algorithms: artificial intelligence, machine learning, large language models, and statistical learning models
Corresponding practices: responsible innovation, programming, hacking, professional code
Bias, fairness, and justice
Causation
Data privacy and informed consent
Explainability, interpretability, and transparency
Responsibility
Applications in government and policing
Professional ethics
Reproducibility
…and more
I will not be ashamed to say, “I know not,” nor will I fail to call in my colleagues when the skills of another are needed for solving a problem.
I will respect the privacy of my data subjects, for their data are not disclosed to me that the world may know, so I will tread with care in matters of privacy and security.
I will remember that my data are not just numbers without meaning or context, but represent real people and situations, and that my work may lead to unintended societal consequences, such as inequality, poverty, and disparities due to algorithmic bias.
From National Academies of Sciences, Engineering, and Medicine (2018) based on the Hippocratic Oath for physicians.
The data scientist is (morally) responsible for
The user is (morally) responsible for
Angela Lipps, from Tennessee, spent five months in jail after facial recognition software connected her to a string of bank fraud cases in Fargo, North Dakota.
The facial recognition company Clearview AI uses “publicly available images” for its database and to train the models it provides to law enforcement agencies.
Their statement on how models should be used:
“Once a search is performed, the search may return a set of potential leads, which the investigator is required to independently verify by both peer review and other means, before continuing with their investigation.”
Website reports “99% accuracy for all demographics”
What are the ethical considerations for the data scientists building such facial recognition technology?
What are the ethical responsibilities of the data scientists building such facial recognition technology?
What are the ethical considerations for those using such facial recognition technology?
What are the ethical responsibilities of those using such facial recognition technology?
A data analyst received permission to analyze a data set that was scraped from a social media site. The full data set included name, screen name, email address, geographic location, IP (internet protocol) address, demographic profiles, and preferences for relationships.
What are the ethical considerations of putting a deidentified data set, with name and email address removed, into an LLM (e.g., Claude or ChatGPT) to help with analysis?
Examples of when things go wrong:
Cambridge Analytica made ‘ethical mistakes’ because it was too focused on regulation, former COO says from Vox.com
How big data is helping states kick poor people off welfare from Vox.com
How AI researchers uncover ethical, legal risks to using popular data sets from the Washington Post
Boeing’s manufacturing, ethical lapses go back decades from the Seattle Times
Examples of the gap in public trust:
Can we trust the polls this year? from Vox.com
Polling problems and why we should still trust (some) polls from Vanderbilt Project on Unity and American Democracy
So, can we trust the polls? from New York Times
Can we still trust the polls? from University of Southern California
Special topic: Causal inference
Complete Lecture 21 prepare