Thinking Ethically

HMIA 2025


"Readings"

PRE-CLASS

CLASS

Outline

  1. Preamble about course as "ethics course"
  2. How much can we learn from Wikipedia on ethics? Read three articles with an eye to being able to answer these questions.
  3. Class discussion builds on these answers to connect to AI and alignment.
  4. Post-class work will connect to our analysis framework and assignment options.
  5. IMPORTANT: Need to figure out how to present the C, P, P, R, I framework.
  6. Activity TBD


How much can we learn from Wikipedia?

PRE-CLASS


STOP+THINK

Name and briefly explain the three major branches of ethics.

Ethics often involves resolving conflicts between equally valid principles. What's an example of that?

The Wikipedia article mentions moral relativism and moral universalism. Explain how they differ.

 

STOP+THINK

Professional ethics often grant experts both special authority and special obligations. Explain with an example.

Professions often define themselves through codes of ethics. Why might the public be skeptical of that approach?

 

STOP+THINK

The article lists key concerns in AI ethics, such as bias, transparency, and accountability. Say briefly what each of these is.

Some AI ethicists argue for human-in-the-loop oversight, while others emphasize formal constraints or values built into systems. What do these terms mean?

 


Ethics

Normative Ethics

Descriptive Ethics

Meta-Ethics

Applied Ethics

Consequentialism

Deontology

Virtue Ethics

Categorical Imperative

Universalism

Relativism

Utilitarianism

Professional Ethics


Ethics as a Style of Thinking

CLASS

STOP+THINK

How do the three major branches of ethics—normative, meta-ethics, and applied ethics—map onto the problem of aligning AI systems with human values?

Ethics often involves resolving conflicts between equally valid principles (e.g., freedom vs. safety). How might such ethical dilemmas show up in the design of professional codes or AI systems?

The Wikipedia article mentions moral relativism and moral universalism. Why might this distinction matter for aligning diverse human communities—or machines trained on their data?

STOP+THINK

Professional ethics often grant experts both special authority and special obligations. Why might this dual structure be important for aligning expert intelligence with the public good?

Professions often define themselves through codes of ethics. What alignment problems might arise if these codes are vague, unenforced, or in conflict with institutional incentives?

STOP+THINK

The article lists key concerns in AI ethics, such as bias, transparency, and accountability. Which of these concern alignment failures, and which concern the limits of our ability to assess alignment?

Some AI ethicists argue for human-in-the-loop oversight, while others emphasize formal constraints or values built into systems. What are the trade-offs between these approaches as alignment strategies?


Let's Get Practical

Pragmatics

So, you wanted to talk about something?

As you know, our team is preparing to launch a new productivity app. During testing, I've discovered that the app collects users' clipboard data—even when the app is running in the background—and uploads it to the company server.

This data could include sensitive information, such as passwords or personal messages.


Why does this happen?

The feature was added to enable a cross-device sync function, but users were not explicitly informed.


What's your take on this?

We’re in a competitive market where privacy concerns are increasingly important. Our company has previously marketed itself as a “privacy-conscious” alternative to larger tech firms. Failing to meet that standard now could feel like a betrayal to users. The engineering team may not have been aware of the ethical implications when they implemented this feature. It solved a problem we had at the time.


So, let's think through this. Who are the stakeholders here?

Well, users, for one. But then there's the product team - they're under pressure. And legal and leadership have a reason to be concerned too.


Walk me through what they care about, as you see it.

Users expect transparency and control over their personal data.

The product team might go bananas if we talk about delays and diminished features.

Legal/compliance staff may worry about GDPR or other privacy violations.

The company leadership likely wants to maintain a good reputation and avoid liability.


That sounds right.  What are our options here?

We could (1) disable the clipboard sync until we’ve added clear consent and disclosure, (2) redesign the feature so it only works with explicit user activation, or (3) keep it as is, but that risks violating privacy expectations and regulations. 


So, how are you thinking about this? I mean, what are the ethics, not the practical implications?

Well, the principles at stake include respect for user autonomy (through informed consent), privacy, and professional responsibility. The feature, as currently implemented, violates user trust and potentially breaches data protection laws or standards.


OK, now the pragmatics. What are the impacts and consequences of our options here?

In the short term, disabling the feature may delay the app launch or reduce a popular function. However, continuing with the current design risks user backlash, reputational damage, and legal exposure. Long-term trust with users is more valuable than the short-term benefit of seamless sync.


And you recommend what?

I recommend disabling the clipboard data collection feature until we implement proper user consent and disclosure.
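
To make option (2) from the dialogue concrete, here is a minimal sketch of a consent-gated, user-initiated clipboard sync. It assumes a browser or Electron-style client using the standard Clipboard API; the settings store, the uploadToSyncServer helper, and the endpoint URL are hypothetical placeholders for illustration, not the app's actual code.

```typescript
// Sketch of option (2): clipboard sync runs only on an explicit user action,
// and only after the user has opted in. Names and endpoint are illustrative.

interface SyncSettings {
  clipboardSyncConsent: boolean; // recorded only after clear disclosure and opt-in
}

// Hypothetical settings store; a real app would persist this per user.
async function loadSettings(): Promise<SyncSettings> {
  return { clipboardSyncConsent: false }; // default: no consent, no sync
}

// Hypothetical upload helper; the endpoint is a placeholder.
async function uploadToSyncServer(text: string): Promise<void> {
  await fetch("https://example.com/api/clipboard-sync", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text }),
  });
}

// Called from a visible "Sync clipboard" button, never from a background task.
export async function syncClipboardOnUserAction(): Promise<void> {
  const settings = await loadSettings();
  if (!settings.clipboardSyncConsent) {
    return; // no consent on record: do nothing (surface the opt-in dialog instead)
  }
  // Clipboard read requires a user gesture and permission in browsers.
  const text = await navigator.clipboard.readText();
  await uploadToSyncServer(text);
}
```

The design point is simply that the sync path cannot execute without both an explicit opt-in on record and a deliberate user action, which is what the recommendation above asks for.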




Resources

Downes, S. 2017. "An Ethics Primer" (blog post)

Wikipedia editors. "Ethics"

Wikipedia editors. "Professional Ethics"

Wikipedia editors. "Ethics of artificial intelligence"

HMIA 2025 Thinking Ethically

By Dan Ryan
