Series · Insights
Short, sharp dispatches on explainability, bias, and the decisions buried inside machine learning systems. Every piece ends with a question you can act on today — no PhD required.
Insights · Bias & Accountability
In 2019, a commercial algorithm deployed in US hospitals systematically underestimated the medical needs of Black patients — not by design, but by data: it used healthcare spending as a proxy for health need, and less is spent on Black patients. The algorithm called this fairness. The study that exposed it called it a defining case. Baba calls it a mirror.
Insights · Hallucination & Calibration
LLMs hallucinate at rates ranging from 9% on general queries to 75% on legal ones. MIT researchers found that models use more confident language when they are wrong than when they are right. A confidence score of 0.94 is not certainty — it is a performance. Here is what that means for every decision you act on.
Insights · Explainability & Law
What operates in darkness does not vanish — it governs. The explainability mandates now threading through AI regulation are not bureaucratic friction. They are the oldest act of institutional self-knowledge, finally made compulsory. Here is why that is good for society, and profitable for those who build these systems.