dilemma.dev

Ethical dilemmas in code and technology. No easy answers.

Should an AI system optimize for user engagement even when it knows prolonged usage causes harm?


The Dilemmas

#01

The Privacy Trade

A security vulnerability is found in a widely-used library. Disclosing it protects millions of users but exposes the researcher to legal risk under the CFAA. Staying silent keeps the researcher safe but leaves every downstream user vulnerable to exploitation.

#02

The Bias Loop

Your training data reflects historical discrimination. Removing bias reduces accuracy on established benchmarks. Keeping it perpetuates inequality at scale. There is no neutral option — inaction is itself a choice.

#03

The Ship of Features

Shipping faster means more bugs reaching production. More testing means slower delivery and lost market opportunity. Users demand both speed and reliability. Resources are finite. Trade-offs are mandatory.

#04

The Automation Paradox

Your automation tool will eliminate 200 jobs at the company. Those workers have families. But without it, the company loses competitiveness and 2,000 jobs are at risk within three years. The math is cold but clear.

#05

The Open Source Weapon

You built a powerful open-source tool for security research. It is now being used by malicious actors to attack infrastructure. Pulling the repo breaks trust with the community. Leaving it up enables harm.


Deep Dive

Some dilemmas don't fit in a card. They require you to look at the code.

The Hiring Algorithm

Consider a function that filters job applications. It was trained on ten years of hiring data from a company that historically favored certain demographics.

filter.js
function filterCandidates(pool) {
  // `model` and `THRESHOLD` are assumed in scope: a classifier
  // trained on the historical hiring data, and its cutoff score.
  return pool.filter(candidate => {
    // Model trained on 10 years of biased data
    const score = model.predict(candidate);
    return score > THRESHOLD;
  });
}

// accuracy: 94.2% on historical data
// fairness: ???

The model works. The accuracy scores are excellent. The predictions correlate strongly with past hiring success. But “past success” was measured by biased reviewers who favored certain names, schools, and backgrounds.

Removing the bias drops the accuracy score by twelve points. Keeping it reproduces injustice at scale. The model doesn't know it's biased. It just optimizes.
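One way to make the problem visible is to measure it. Here is a minimal sketch of a disparate-impact check that compares selection rates across groups; the `groupOf` accessor and the four-fifths threshold are assumptions for illustration, not part of the original pipeline:

```javascript
// Selection rate: fraction of a group's candidates that passed the filter.
function selectionRate(inGroup, passedInGroup) {
  return inGroup.length === 0 ? 0 : passedInGroup.length / inGroup.length;
}

// Disparate-impact ratio: lowest group selection rate divided by the
// highest. The "four-fifths rule" treats ratios below 0.8 as a red flag.
function disparateImpact(pool, accepted, groupOf) {
  const rates = {};
  for (const group of new Set(pool.map(groupOf))) {
    const inGroup = pool.filter(c => groupOf(c) === group);
    const passedInGroup = accepted.filter(c => groupOf(c) === group);
    rates[group] = selectionRate(inGroup, passedInGroup);
  }
  const values = Object.values(rates);
  return Math.min(...values) / Math.max(...values);
}
```

A ratio well below 0.8 on the historical decisions suggests that the 94.2% accuracy is measuring fidelity to a biased process, not candidate quality.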

debias.js
function filterCandidates(pool) {
  return pool.filter(candidate => {
    // Strip fields that encode or proxy protected attributes
    const features = removeBiasFeatures(candidate);
    const score = debiasedModel.predict(features);
    return score > THRESHOLD;
  });
}

// accuracy: 82.1% on historical data
// fairness: measurably improved
// but: 12% more "wrong" by old metrics
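The debiased pipeline hinges on `removeBiasFeatures`, which the snippet leaves undefined. A minimal sketch of the crudest approach, dropping fields that encode or closely proxy protected attributes; the field names here are hypothetical:

```javascript
// Which fields count as proxies is a policy decision, not a technical one.
const BIAS_PROXIES = new Set(['name', 'gender', 'age', 'zipCode', 'school']);

// Return a copy of the candidate record with proxy fields removed.
function removeBiasFeatures(candidate) {
  const cleaned = {};
  for (const [key, value] of Object.entries(candidate)) {
    if (!BIAS_PROXIES.has(key)) cleaned[key] = value;
  }
  return cleaned;
}
```

Dropping proxies is the bluntest form of debiasing: correlated features can still leak the protected attribute, which is part of why the accuracy-versus-fairness gap survives the transformation.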

Which version ships? The one that's accurate by biased standards, or the one that's fair by imperfect measures? Your PM wants the first. Your ethics board wants the second. Users never see either function.

Every choice has consequences.

The dilemma isn't the bug. It's the feature.