
London Underground tests AI to monitor its network

London Underground has tested real-time AI surveillance tools to detect potential risk or conflict situations and enable staff to intervene quickly. The system detects aggressive behavior and identifies people who sneak in without paying.

London Underground has conducted a year-long trial of artificial intelligence (AI)-based surveillance tools to detect crime, weapons, falls on tracks and fare evasion, according to documents obtained by WIRED.

The computer vision system was combined with live CCTV footage to generate alerts that were sent to station staff.

This is the first time the London transport agency has used AI and live video to improve the safety and efficiency of its underground network.

11 algorithms

The test was carried out at Willesden Green station, in the north-west of the city, where 11 different algorithms were used to monitor the behavior and movements of people passing through the station.

The objective was to detect possible risk or conflict situations and allow staff to intervene quickly. More than 44,000 alerts were issued during the test, of which 19,000 were delivered to station staff in real time.

Computer vision

The documents, which are partially redacted, show how various computer vision models were used to detect behavioral patterns at the station.

These include recognizing wheelchairs, strollers, and electronic cigarettes, as well as detecting people who enter unauthorized areas or put themselves in danger by approaching the edge of the platform.

The system was also designed to detect aggressive behavior and knives or firearms, as well as to identify people who jumped payment barriers.

The system also generated alerts for “rough sleepers and beggars” at the station’s entrances, which allowed staff to “remotely monitor the situation and provide the necessary care and assistance”, according to the documents.

Smart stations

The test is part of the “Smart Stations” project, which aims to harness the benefits of AI to improve user experience and network performance.

However, the initiative has also raised concerns about the privacy and reliability of the technology, as the system can make errors or be used for improper purposes.

For example, the system mistook children following their parents through the barriers for people who had snuck in without paying, or could not distinguish between a folding bicycle and a non-folding one.

Grey areas

Furthermore, it is unclear how much information was provided to, and what consent was obtained from, the people subjected to AI scrutiny.

During the tests, images of people’s faces were blurred and data was retained for up to 14 days. However, six months into the trial, it was decided to stop blurring the faces of people suspected of fare evasion and to retain that data for longer.

To avoid false positives or rights violations, the system is designed so that a human operator reviews the AI’s decisions before taking any action.

The aim is thus to strike a balance between the use of technology and respect for privacy. According to Transport for London (TfL), the body responsible for the Underground, the trial has been a success, and there are plans to expand the use of AI to more stations in the future.

Lack of precision

The London Underground trial is an example of how AI can impact the safety and efficiency of public services, but also raises ethical and legal challenges.

There is still a lack of clear principles and standards to ensure that AI is used responsibly and transparently, and that people’s rights and freedoms are protected… even in a subway station on a normal day when nothing important ever happens.
