The Algorithm Doesn’t Owe You an Explanation
How facial recognition and AI systems are normalizing decisions no one has to justify.
Carol Myers was not accused of stealing. That was part of what unsettled her.
She had finished paying at a self-checkout in a Sainsbury’s in south London when a screen flashed red. An employee approached and told her she would need to be checked. Myers asked why. The employee could not say.
“My face popped up on the screen,” Myers told me. “Then when you pay, the system goes off and you need to get checked.”
The same thing happened weeks later at a Primark and an H&M. And then again at a different location in Lewisham. Myers began noticing who was stopped and who was not.
“There were three white people in front of me,” she said. “They went through. The minute I went there, it happened.”
No one explained what she had been flagged for. No one explained how long the information would be kept or where else it might travel. No one had to.
Each time, the explanation was the same, delivered, as Myers described it, like a script. “It’s just a random check. It’s just the AI. It’s just doing it random, just to check.” She asked how it could be random if it kept happening to her, at different locations, on different days. No one had an answer for that either.
I am not British. I don’t live in London. But I am a Black man, and listening to Myers describe what happened to her felt like something I already knew — not because I had experienced the same technology, but because I recognized the structure. Someone in authority makes a decision about you. The decision is delivered without context. You are expected to comply first and understand later, if at all. The technology is new. The experience is not.
Myers is an IT teacher and a law student. She understood faster than most people would that what she was encountering was facial recognition, not a random check. But understanding the technology did not protect her from it. And it did not give her anywhere to direct her frustration.
“Who are you going to be upset at?” she asked. “The machine or the store clerk? They are doing their job.”
Myers compared the experience to the stop-and-search encounters her sons began having when they were 12. In both cases, the reason was never personal. It was always the system.
But stop and search, however flawed, at least operated within a legal framework that required stated grounds. What happened to Myers required nothing. The machine decided and the employee delivered the news.
“It impacts you in a negative way,” she told me. “You’re spending your money and the algorithm is picking you as a Black person.”
She had just completed a transaction — the most ordinary, cooperative thing a person can do inside a store — and the system flagged her anyway. The receipt did not matter.
What Myers described mirrors facial-recognition systems now used across parts of the UK retail sector, including those operated by Facewatch, which runs in hundreds of stores. Myers had never heard of Facewatch before I mentioned it to her during our conversation. She did not consent to being scanned. She said she never saw a sign in any of the stores explaining that the technology was in use.
Retailers in the UK don’t need customer consent to use facial recognition. Campaigners argue that a small sign near an entrance tells shoppers very little about when they are being scanned, how long the data is kept, or where it goes next. These systems run on private contracts, not public law, but they rely on the same face-scanning logic used by police forces.
Metropolitan Police facial recognition systems in London wrongly flagged people as suspects in 80% of alerts in 2025. Eight out of 10 of those false alerts involved Black people. Police scanned more than 3 million people across London in the past year alone.
Warren Rajah, a Black Londoner, was ejected from an Elephant & Castle Sainsbury’s in January 2026 after staff said the system flagged him as a shoplifter. He was not in the database. A spokesperson for Sainsbury’s told me the system had not made a mistake; an employee had stopped the wrong person. But Rajah had already been marked in front of other shoppers.
There is a distinction that seems small but changes everything: the difference between a wrong decision and an unexplained one. A wrong decision can, at least in theory, be corrected. An unexplained decision teaches you that correction was never part of the design.
Dr. Nessa Keddo, a senior lecturer at King’s College London who studies automation and creative labor, put it this way: “Algorithms decide what we see, who gets opportunities, how we’re categorized and judged.”
Keddo’s own path into this work started in advertising, where she spent more than a decade working on campaigns for brands like Disney and Adidas. She began noticing the problem from the back end of Meta’s business tools — who was included in audience measurement, who was excluded, and what happened when targeting became personal. Her clients were asking versions of the same questions her research would later formalize: what are we measuring, who is being left out, and what follows from that absence?
In creative industries, Keddo has watched algorithmic systems reshape labor without ever announcing the change. She told me entry-level workers, especially recent graduates, are often hit first. Roles that once required teams to brainstorm and build campaigns are compressed by automation. The jobs do not always disappear cleanly. They narrow. They thin out. Engagement drops. Opportunities dry up. People feel the consequences without being told what changed or whether it will change back. There is no one to appeal to and nothing clear to contest.
That dynamic becomes sharpest in policing, and it predates the current generation of AI tools.
Sara Chitseko, pre-crime program manager at Open Rights Group, was 16 when the Metropolitan Police began building the Gangs Matrix in 2012. She remembers it clearly because she had friends who were criminalized under it. The database emerged in the aftermath of the 2011 London riots, when Prime Minister David Cameron announced a “war on gangs” — language Chitseko describes as a dog whistle for Black youth.
At its peak in October 2017, the Matrix held 3,806 people. Seventy-eight percent of them were Black, even though Black Londoners made up only 27% of those convicted of serious youth violence. The vast majority of those listed were young Black men and boys. Being on the Matrix could get you denied housing, rejected from college and stripped of benefits. Young people often did not find out they were listed until a school, housing provider, or another institution told them they had been flagged. The police were not saying, “You are on the Gangs Matrix.” The consequences arrived first. The person affected had to work backward to figure out why.
Years later, after sustained pressure from communities and legal advocates, the Metropolitan Police admitted the database was unlawful and dismantled it. Chitseko said she was in her bedroom at her parents’ house when the news broke. Her phone lit up with messages. “Different people being like, have you seen it? It’s in the news,” she told me. “I think we’ve got there.”
But the victory came with a caveat. By the time the Matrix was dismantled, the data it contained had already been shared — with immigration, with housing providers, with other police systems. Trying to trace where it had all gone was, as Chitseko described it, “a web.” And when the Met announced it was replacing the Matrix with something called the Violence Harm Assessment, Chitseko recognized the logic immediately. Same structure. Different name.
“These systems don’t predict crime,” she said. “All they really predict is policing, because they’re based on past policing data.”
Historical police data, shaped by unequal enforcement, gets fed into models that reproduce those patterns while presenting the output as objective.
“If it’s discriminatory and unjust,” Chitseko said, “that’s just injustice delivered faster and at scale.”
Chitseko also cautioned against investing more money in making facial recognition more accurate. She said even if these systems were perfectly accurate, she still wouldn’t want to live in a world where her face was being scanned every day.
That matters because we keep talking about AI bias as though the deepest problem is accuracy, as though better training data or more representative datasets would solve what is happening here. But even a perfectly accurate system that never explains itself is still a system that has taken away your say in what happens to you. Accuracy does not restore what explanation would have provided.
The dismantling of the Gangs Matrix did not erase the data that had already been collected or undo every consequence of its use. But it interrupted a system that relied on silence. That interruption matters because it proved something these systems work hard to make us forget: their authority depends on consent, compliance, and the hope that no one asks too many questions.
I started reporting on technology to understand why it was not helping the people who needed it most. What I found, across two countries and several years, is that the tools already exist. They are everywhere. The question is who they are pointed at, and whether the people in their sights will ever be told why.
Myers told me she still shops in the same neighborhoods. She still pays attention to who is stopped and who is not.
“It’s devastating,” she said. “Everyone looks at you. It feels like you’ve done something wrong when you haven’t.”
Artificial intelligence did not create this vulnerability. It made it easier to enforce without explanation. But explanation is how power becomes visible. When it disappears, everything else becomes easier to take. ⁂
This essay builds on reporting for Coded Out, a three-part AI policy podcast I produced and hosted for The Black Policy Institute in London. The podcast featured interviews with Carol Myers, Dr. Nessa Keddo, and Sara Chitseko.