Amazon-powered AI cameras used to detect emotions of unwitting British train passengers

Network Rail did not answer questions sent by WIRED about the trials, including questions about the current status of AI use, emotion detection, and privacy concerns.

“We take the safety of the rail network extremely seriously and use a range of advanced technologies at our stations to protect passengers, our colleagues and the rail infrastructure from crime and other threats,” a Network Rail spokesperson said. “When we deploy technology, we work with police and security services to ensure we take proportionate action and that we always comply with relevant legislation regarding the use of surveillance technologies.”

It’s unclear how widely the emotion detection analysis was deployed, with the documents sometimes saying the use case “needs to be viewed with more caution” and reports from stations saying it is “impossible to validate accuracy.” However, Gregory Butler, the CEO of data analytics and computer vision company Purple Transform, which worked on the Network Rail trials, says the capability was stopped during testing and no images were saved when it was active.

The Network Rail documents on the AI trials describe multiple use cases where the cameras could potentially send automatic alerts to staff when they detect certain behaviour. None of the systems use controversial facial recognition technology, which aims to match people’s identities with those in databases.

“A key benefit is the faster detection of violation incidents,” says Butler, who adds that his company’s analytics system, SiYtE, is in use at 18 locations, including train stations and along rail lines. Butler says systems at two locations detected five serious trespass incidents in the past month, including a teenager who collected a ball from the tracks and a man who “spent more than five minutes picking up golf balls along a high-speed line.”

At Leeds train station, one of the busiest outside London, 350 CCTV cameras are connected to the SiYtE platform, Butler says. “The analytics are used to measure people flow and identify issues such as crowding on platforms and, of course, violations – with the technology able to filter out rail workers by their PPE,” he says. “AI helps human operators, who cannot continuously monitor all cameras, to quickly assess and address safety risks and issues.”

The Network Rail documents claim that cameras used at one station, Reading, allowed police to speed up investigations into bike thefts by making it possible to locate bikes in the footage. “It was determined that while analytics could not definitively detect a theft, they could detect a person with a bicycle,” the files say. They also add that new air quality sensors used in the trials could save staff time on manual checks. One AI system uses sensor data to detect ‘sweaty’ floors, which have become slippery due to condensation, and alerts staff when they need to be cleaned.

While the documents detail some elements of the trials, privacy experts say they are concerned about the overall lack of transparency and discussion around the use of AI in public spaces. Big Brother Watch’s Hurfurt says a document designed to assess data protection issues with the systems shows what appears to be a “dismissive attitude” towards people who may have privacy concerns. One question asks: “Are some people likely to object or find it intrusive?” A staff member writes: “Typically no, but there is no accounting for some people.”

At the same time, similar AI surveillance systems that monitor crowds are increasingly being deployed around the world. During the Paris Olympics in France later this year, AI video surveillance will monitor thousands of people and attempt to detect crowd surges, use of weapons, and objects left behind.

“Systems that don’t identify people are better than systems that do, but I worry about a slippery slope,” says Carissa Véliz, associate professor of philosophy at the University of Oxford’s Institute for Ethics in AI. Véliz points to similar AI trials on the London Underground, where the faces of people suspected of dodging fares were initially blurred, but the approach later changed: images were unblurred and kept for longer than initially planned.

“There is a very instinctive urge to expand surveillance,” says Véliz. “People want to see more, see further. But surveillance leads to control, and control to a loss of freedom that threatens liberal democracies.”
