
When the numbers don’t add up: A statistical gambit at the human rights tribunal

by Todd Humber

There’s a certain desperation that sets in when you’ve lost a case and the clock is ticking on your last chance to change the outcome. But when a medical resident in Ontario tried to overturn a human rights tribunal’s decision against her by arguing that statistics proved the adjudicator was biased, she wasn’t just grasping at straws. She was trying to weaponize mathematics itself.

The case involved serious allegations: sexual harassment during a residency program, threats of institutional reprisal, claims of discrimination based on race, ethnic origin and sex. The applicant had her day in court — actually, five days spread across nearly a year, all parties represented by counsel. The tribunal dismissed her application in September. She had 30 days to request reconsideration.

What she filed was unusual. Two tables. The first purported to be a “statistical method for assessing judicial bias.” It listed various adjudicators, calculated their applicant win rates, and flagged those who appeared to significantly favour respondents. The adjudicator who heard her case, according to this analysis, was one of them. The second table offered “Raw Data” to support these conclusions — a list of some adjudicators and some of their decisions between 2011 and 2025.

On its face, it sounds almost scientific. Data-driven. The kind of objective analysis that might cut through subjective impressions and reveal hidden patterns. In an age when we track everything from baseball swings to consumer behaviour, why shouldn’t we measure whether decision-makers are truly impartial?

The tribunal took a closer look at the numbers. They didn’t hold up.

The first table claimed the adjudicator had issued only 16 decisions. She had actually issued nearly 300. The majority of adjudicators listed had left the tribunal years earlier. Many current adjudicators and their decisions were missing entirely. The second table, supposedly the raw data underlying the first, included different adjudicators than the summary table it was meant to support.

But there was something else: a detail that undermined the entire exercise. Buried in the applicant's own submission was this admission: "At the time of the merits hearing, HRTO Member Inbar had not yet issued a sufficient number of merits decisions to enable the statistical analysis in support of a request to recuse."

At the time the hearing actually happened — the moment when a bias challenge would have mattered — there wasn’t enough data to make the statistical argument. The analysis only became possible after the decision was made, after the applicant lost.

The law on apprehension of bias asks whether a reasonable and informed person, viewing the matter realistically, would conclude that the decision-maker was more likely than not to decide unfairly. It’s not about outcomes. It’s about words and conduct during the proceeding itself — whether the adjudicator demonstrated an open mind to the evidence and arguments presented.

The applicant offered no such evidence. No questionable comments from the bench. No improper behaviour during the hearing. No indication that she wasn’t given a fair opportunity to present her case. Just the tables, and the assertion that they revealed bias.

The tribunal refused the reconsideration request. Reconsideration is not an appeal, not a chance to reargue a case. It’s a narrow remedy, available only when new evidence emerges that couldn’t have been obtained earlier, or when extraordinary circumstances outweigh the public interest in finality. None of those conditions existed here.

The applicant’s instinct — that patterns in decision-making might reveal bias — isn’t entirely wrong. Courts and tribunals have grappled with questions about whether certain decision-makers show troubling patterns. These are legitimate questions in a justice system that aspires to treat everyone equally.

But legitimacy requires rigour. It requires complete data, proper methodology, and analysis that can withstand scrutiny. Most importantly, it requires acknowledging what statistics can and cannot tell us. A win-loss record tells you nothing about whether the cases themselves had merit. An adjudicator who rules against applicants 80 per cent of the time might be biased — or might simply be hearing weak cases.
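To see why a win rate alone proves nothing, consider a hypothetical sketch (the numbers and the Python code below are illustrative, not drawn from the case): two adjudicators apply exactly the same unbiased rule, ruling for the applicant only when the case is strong, but one hears a docket where 40 per cent of cases are strong and the other a docket where only 10 per cent are.

```python
import random

random.seed(1)

def simulate_win_rate(n_cases: int, share_strong: float) -> float:
    """Apply one fixed, unbiased rule -- rule for the applicant only when
    the case is strong -- and report the resulting applicant win rate."""
    wins = 0
    for _ in range(n_cases):
        case_is_strong = random.random() < share_strong
        if case_is_strong:  # the identical rule, applied by both adjudicators
            wins += 1
    return wins / n_cases

# Hypothetical dockets: same decision rule, different mixes of case merit.
print(simulate_win_rate(300, 0.40))  # roughly a 40% applicant win rate
print(simulate_win_rate(300, 0.10))  # roughly 10% -- looks "biased", isn't
```

Both adjudicators in this toy model are equally fair; the gap in their win rates comes entirely from the cases that happened to land in front of them.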

The applicant was self-represented in the reconsideration request, though she’d had counsel throughout the original hearing. When you’ve lost a case involving allegations this serious, and you believe the process was unfair, perhaps any tool seems worth using.

The tribunal’s decision is now final. The applicant can seek judicial review if she believes there were errors in law, but she cannot reopen the hearing based on statistical analysis that didn’t exist when it mattered and doesn’t hold up when examined.

Sometimes the math is simpler than we want it to be. Sometimes two plus two equals a loss, and no amount of creative accounting will change the sum.
