Insights

Short, sharp dispatches on explainability, bias, and the decisions buried inside machine learning systems. Every piece ends with a question you can act on today — no PhD required.

Cadence · Weekly, every Wednesday
Format · 5–15 min read
Audience · Practitioners & builders
All Insights · 3 articles · more weekly

Insights · Bias & Accountability

The Shadow in Your Data

In 2019, a commercial algorithm deployed in US hospitals systematically underestimated the medical needs of Black patients — not by design, but by data. The system called this fairness. The study that exposed it called it a defining case. Baba calls it a mirror.

Insights · Hallucination & Calibration

The Model Is Not Confident. It Is Calibrated.

LLMs hallucinate at rates ranging from 9% on general queries to 75% on legal ones. MIT researchers found they use more confident language when wrong than when right. A confidence score of 0.94 is not certainty — it is a performance. Here is what that means for every decision you act on.

Insights · Explainability & Law

The Shadow Has a Mandate Now

What operates in darkness does not vanish — it governs. The explainability mandates now threading through AI regulation are not bureaucratic friction. They are the oldest act of institutional self-knowledge, finally made compulsory. Here is why that is good for society, and profitable for those who build.