Ethics is not someone else's job
In many AI companies, ethics is treated as a governance function. A review board. A checklist before launch. A set of principles on the website that nobody references during actual product development.
This is a structural failure, and designers are uniquely positioned to address it. Not because we are ethicists — we are not — but because we control the layer where humans and AI actually interact. That layer is where ethical principles either become real or remain abstract.
Where design meets ethics
Consider a few concrete examples:
How confidence is displayed. If an AI system presents a 72% confidence score and a 98% confidence score with the same visual treatment, the operator cannot calibrate their trust. The designer who chose that visual treatment has made an ethical decision — whether they intended to or not.
What gets automated and what does not. When we decide which actions require human confirmation and which happen automatically, we are drawing a line of moral agency. This is not just a product decision. It is an ethical one.
Who sees what. Information architecture determines who has access to which data and in what context. A dashboard that surfaces one metric and buries another shapes organizational behavior. The designer who chose that hierarchy has influenced decisions downstream in ways they may never see.
How overrides work. If the system makes it difficult for a human to override an AI recommendation — through friction, social pressure, or buried UI — the designer has effectively transferred authority to the machine.
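The confidence example above can be made concrete with a small sketch. This is a hypothetical TypeScript fragment, not a description of any real product: the band boundaries, labels, and the `requiresReview` flag are all illustrative assumptions. The point is that mapping scores to distinct treatments is an explicit design decision that can be written down and reviewed.

```typescript
// Hypothetical sketch: map a model confidence score to a distinct
// visual treatment so operators can calibrate trust at a glance.
// Band boundaries (0.5, 0.8, 0.95) are illustrative, not a standard.

type Treatment = {
  label: string;            // text shown next to the score
  color: string;            // design-system color token for the badge
  requiresReview: boolean;  // whether the UI asks for human confirmation
};

function confidenceTreatment(score: number): Treatment {
  if (score < 0 || score > 1) {
    throw new RangeError(`confidence must be in [0, 1], got ${score}`);
  }
  if (score < 0.5)  return { label: "Low",       color: "red",   requiresReview: true };
  if (score < 0.8)  return { label: "Moderate",  color: "amber", requiresReview: true };
  if (score < 0.95) return { label: "High",      color: "blue",  requiresReview: false };
  return              { label: "Very high", color: "green", requiresReview: false };
}

// A 72% and a 98% score now receive visibly different treatments:
console.log(confidenceTreatment(0.72).label); // "Moderate"
console.log(confidenceTreatment(0.98).label); // "Very high"
```

Note that the sketch also encodes the override question: the same function that picks a color decides when a human must confirm, which is exactly the kind of line-drawing the examples above describe.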
The designer's unique contribution
Engineers build the capability. Researchers develop the models. Product managers define the requirements. But the designer is the one who answers: How will a human experience this?
This question is inherently ethical. It asks:
- Will the person understand what the system is doing?
- Will they have meaningful control?
- Will they be able to exercise judgment?
- Will they feel the weight of their decisions appropriately — neither overwhelmed nor detached?
What this looks like in practice
At Helsing, I push for design to be present in ethical discussions from the beginning, not brought in to visualize decisions that have already been made. Concretely, this means:
- Designers participate in AI ethics reviews, not just engineering and policy teams
- We prototype ethical edge cases, not just happy paths — what does the interface look like when the AI is uncertain? When it is wrong? When the stakes are highest?
- We document the ethical rationale for design decisions, not just the functional rationale
- We test with operators specifically for scenarios where the AI might mislead them
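Prototyping edge cases, not just happy paths, can be supported in code as well as in mockups. The following is a minimal hypothetical sketch (all state names are illustrative): enumerating the interface's possible states as a discriminated union means an uncertain, degraded, or unavailable AI is a state the designer must handle explicitly, and the TypeScript compiler's exhaustiveness check flags any state that was never designed for.

```typescript
// Hypothetical sketch: enumerate the states an AI-assisted interface
// must render, so edge cases are designed deliberately rather than
// falling through to the happy-path layout. Names are illustrative.

type InterfaceState =
  | { kind: "confident";   recommendation: string; confidence: number }
  | { kind: "uncertain";   candidates: string[];   confidence: number }
  | { kind: "degraded";    reason: string }   // model known to be unreliable here
  | { kind: "unavailable"; reason: string };  // no recommendation at all

// Every state resolves to an explicit presentation; the switch is
// exhaustive over the union, so adding a new state without designing
// its rendering becomes a compile-time error.
function render(state: InterfaceState): string {
  switch (state.kind) {
    case "confident":
      return `Recommendation: ${state.recommendation} (${Math.round(state.confidence * 100)}%)`;
    case "uncertain":
      return `Multiple options, none confident: ${state.candidates.join(", ")}. Your judgment is required.`;
    case "degraded":
      return `Caution: ${state.reason}. Treat any output as unverified.`;
    case "unavailable":
      return `No recommendation available: ${state.reason}`;
  }
}

console.log(render({ kind: "uncertain", candidates: ["Route A", "Route B"], confidence: 0.4 }));
```

The design choice here mirrors the argument of this section: what the happy-path state says is a product decision, but the fact that the wrong-and-uncertain states exist as first-class citizens is an ethical one.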
Ethics in AI is not a policy document. It is a design practice.
The uncomfortable truth
Designers in AI cannot claim neutrality. Every interface choice encodes a position on how much authority machines should have, how much transparency operators deserve, and how much friction should exist between recommendation and action.
We are already making these choices. The question is whether we are making them deliberately.