"Readings"
PRE-CLASS
CLASS
PRE-CLASS
STOP+THINK
Name and briefly explain the three major branches of ethics.
Ethics often involves resolving conflicts between equally valid principles. What's an example of that?
The Wikipedia article mentions moral relativism and moral universalism. Explain how they differ.
STOP+THINK
Professional ethics often grant experts both special authority and special obligations. Explain with an example.
Professions often define themselves through codes of ethics. Why might the public be skeptical of that approach?
STOP+THINK
The article lists key concerns in AI ethics, such as bias, transparency, and accountability. Say briefly what each of these is.
Some AI ethicists argue for human-in-the-loop oversight, while others emphasize formal constraints or values built into systems. What do these terms mean?
PRE-CLASS
Ethics
Normative Ethics
Descriptive Ethics
Meta-Ethics
Applied Ethics
Consequentialism
Deontology
Virtue Ethics
Categorical Imperative
Universalism
Relativism
Utilitarianism
Professional Ethics
PRE-CLASS
CLASS
STOP+THINK
How do the three major branches of ethics—normative, meta-ethics, and applied ethics—map onto the problem of aligning AI systems with human values?
Ethics often involves resolving conflicts between equally valid principles (e.g., freedom vs. safety). How might such ethical dilemmas show up in the design of professional codes or AI systems?
The Wikipedia article mentions moral relativism and moral universalism. Why might this distinction matter for aligning diverse human communities—or machines trained on their data?
STOP+THINK
Professional ethics often grant experts both special authority and special obligations. Why might this dual structure be important for aligning expert intelligence with the public good?
Professions often define themselves through codes of ethics. What alignment problems might arise if these codes are vague, unenforced, or in conflict with institutional incentives?
STOP+THINK
The article lists key concerns in AI ethics, such as bias, transparency, and accountability. Which of these concern alignment failures, and which concern the limits of our ability to assess alignment?
Some AI ethicists argue for human-in-the-loop oversight, while others emphasize formal constraints or values built into systems. What are the trade-offs between these approaches as alignment strategies?
PRE-CLASS
CLASS
Let's Get Practical
CLASS
Pragmatics
CLASS
Pragmatics
CLASS
Pragmatics
So, you wanted to talk about something?
As you know, our team is preparing to launch a new productivity app. During testing, I've discovered that the app collects users' clipboard data—even when the app is running in the background—and uploads it to the company server.
This data could include sensitive information, such as passwords or personal messages.
CLASS
Pragmatics
Why did this happen?
The feature was added to enable a cross-device sync function, but users were not explicitly informed.
CLASS
Pragmatics
What's your take on this?
We’re in a competitive market where privacy concerns are increasingly important. Our company has previously marketed itself as a “privacy-conscious” alternative to larger tech firms. Failing to meet that standard now could feel like a betrayal to users. The engineering team may not have been aware of the ethical implications when they implemented this feature. It solved a problem we had at the time.
CLASS
Pragmatics
So, let's think through this. Who are the stakeholders here?
Well, users, for one. But then there's the product team - they're under pressure. And legal and leadership have a reason to be concerned too.
CLASS
Pragmatics
Walk me through what they care about, as you see it.
Users expect transparency and control over their personal data.
The product team might go bananas about delays and diminished features.
Legal/compliance staff may worry about GDPR or other privacy violations.
The company leadership likely wants to maintain a good reputation and avoid liability.
CLASS
Pragmatics
That sounds right. What are our options here?
We could (1) disable the clipboard sync until we’ve added clear consent and disclosure, (2) redesign the feature so it only works with explicit user activation, or (3) keep it as is, but that risks violating privacy expectations and regulations.
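For concreteness, option (2) might look something like the sketch below, written in TypeScript purely for illustration. Every name in it (ConsentStore, ClipboardSync, and both callbacks) is hypothetical rather than drawn from the app's actual codebase; the point is only that clipboard data is read and uploaded strictly after an explicit opt-in.

// Hypothetical sketch of option (2): clipboard sync gated behind explicit opt-in.
// All names here are illustrative, not from any real codebase or platform API.
interface ConsentStore {
  hasOptedIn(feature: string): boolean;
}

class ClipboardSync {
  constructor(private readonly consent: ConsentStore) {}

  // Clipboard contents are read and uploaded only after the user has explicitly
  // enabled the feature; by default nothing is read and nothing leaves the device.
  async sync(
    readClipboard: () => Promise<string>,
    upload: (data: string) => Promise<void>,
  ): Promise<void> {
    if (!this.consent.hasOptedIn("clipboard-sync")) {
      return; // no consent: do not touch the clipboard at all
    }
    const contents = await readClipboard();
    await upload(contents);
  }
}

Note the design choice: the consent check comes before the clipboard is read, not before the upload, so without an opt-in the data is never collected in the first place.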
CLASS
Pragmatics
So, how are you thinking about this? I mean, what are the ethics here, not the practical implications?
Well, the principles at stake include respect for user autonomy (through informed consent), privacy, and professional responsibility. The feature, as currently implemented, violates user trust and potentially breaches data protection laws or standards.
CLASS
Pragmatics
OK, now the pragmatics. What are the impacts and consequences of our options here?
In the short term, disabling the feature may delay the app launch or remove a popular function. However, continuing with the current design risks user backlash, reputational damage, and legal exposure. Long-term trust with users is more valuable than the short-term benefit of seamless sync.
CLASS
Pragmatics
And you recommend what?
I recommend disabling the clipboard data collection feature until we implement proper user consent and disclosure.
PRE-CLASS
Resources
Downes, S. 2017. "An Ethics Primer" (blog post)
Wikipedia editors. "Ethics"
Wikipedia editors. "Professional Ethics"
Wikipedia editors. "Ethics of artificial intelligence"