Should AI Have The Power To Enforce Laws? Police To Use AI To Identify Bad Drivers
21-10-2022 | By Robin Mitchell
Recently, police in Devon and Cornwall announced the use of a new technology that leverages AI to automatically identify drivers committing offences, including using a mobile phone at the wheel and not wearing a seatbelt. What challenges does bad driving present, what technology will the police use to fight it, and should we give AI this much power?
What challenges does bad driving present?
Despite the clear dangers presented by bad driving habits, thousands of motorists commit driving offences every day. Whether it is using a mobile phone at the wheel, not wearing a seatbelt, or opening a car door with the wrong hand, these behaviours significantly increase the risk of injury or death, which is why most of them are banned in the first place. In some cases, drivers feel they are fully aware of their surroundings and sure of their own skills, while others simply do not care about the safety of those around them.
To understand the dangers of using a mobile while driving, one only has to look at the case of Tomasz Kroker, who, in 2016, killed a mother, her two sons, and her stepdaughter. While driving a heavy goods vehicle at speed, Kroker was so sure of his driving skills that he decided to scroll through his phone at the wheel. In the few seconds he was transfixed by the screen, his lorry ploughed into the back of the family's car, crushing it into the back of another heavy goods vehicle.
Statistics on seatbelts clearly demonstrate their importance to road safety. Although only around 7% of road users fail to wear a seatbelt, over 34% of fatalities in collisions involve someone not wearing one. Furthermore, seatbelts help keep drivers stable and in control when a vehicle behaves unpredictably, such as during a skid.
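To put those two figures together, a rough back-of-the-envelope comparison (assuming, purely for illustration, that belted and unbelted occupants are involved in collisions at similar rates, which the figures above do not state) suggests unbelted occupants are several times more likely to die in a crash:

```python
# Illustrative only: the 7% and 34% figures come from the article;
# the equal-crash-involvement assumption is ours, not the source's.
unbelted_share_of_users = 0.07    # proportion of road users not wearing a seatbelt
unbelted_share_of_deaths = 0.34   # proportion of collision fatalities who were unbelted

belted_share_of_users = 1 - unbelted_share_of_users    # 0.93
belted_share_of_deaths = 1 - unbelted_share_of_deaths  # 0.66

# How over-represented unbelted occupants are among deaths, relative to belted ones
relative_risk = (unbelted_share_of_deaths / unbelted_share_of_users) / (
    belted_share_of_deaths / belted_share_of_users
)

print(f"Unbelted occupants are roughly {relative_risk:.1f}x more likely to die")
# -> roughly 6.8x under these simplifying assumptions
```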
But what really emphasises the danger of bad driving is that it is so often others who pay the price. Drink-drivers put pedestrians and other road users at greater risk than themselves, drivers using mobiles are far less likely to be hurt than those they hit, and those not using the Dutch reach (opening the car door with the hand furthest from it, which forces a glance over the shoulder) put passing cyclists at serious risk of injury.
Police announce new AI technologies for spotting bad driving habits
Recognising the dangers presented by bad driving habits, police in Devon and Cornwall have recently announced plans to introduce AI vision systems that automatically identify offending drivers. The system, built on Acusensus technology, is fitted into a mobile unit that uses multiple cameras to record footage of passing motorists. The video stream is then fed into an AI that flags potential cases of phone use at the wheel or failure to wear a seatbelt. At the same time, the system measures each vehicle's speed, giving officers a further basis for prosecution.
Once the AI has flagged a potential offence, the results are passed on to police officers, who verify the nature of the offence and then either send the driver a letter about the incident or pursue prosecution in court. It is hoped that the new technology will not only deter drivers from using mobile phones at the wheel but also punish those who have no regard for the safety of others.
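To make the workflow concrete, the sketch below outlines the kind of two-stage pipeline described above, in which an automated detector flags candidate offences and every flag is then reviewed by a human officer before any action is taken. This is a hypothetical illustration only; the names, thresholds, and structure are assumptions, not details of the actual Acusensus system.

```python
from dataclasses import dataclass
from enum import Enum


class Offence(Enum):
    PHONE_AT_WHEEL = "using a mobile phone at the wheel"
    NO_SEATBELT = "not wearing a seatbelt"
    SPEEDING = "exceeding the speed limit"


@dataclass
class Detection:
    plate: str          # registration plate read from the footage
    offence: Offence
    confidence: float   # detector confidence, 0.0 to 1.0
    frame_ref: str      # pointer to the captured frames for review


def detect_offences(frame_batch) -> list[Detection]:
    """AI stage (placeholder): a real system would run vision models
    over the camera feed here and return any candidate detections."""
    return []


def officer_review(detection: Detection) -> str:
    """Human-in-the-loop stage: in the deployed system this decision is made
    by an officer viewing the flagged frames, not by code. The threshold here
    merely illustrates discarding low-confidence flags before review."""
    if detection.confidence < 0.8:
        return "discarded without action"
    return "warning letter or prosecution"


def process_feed(frame_batches) -> None:
    # Every AI flag goes through human review; the AI never issues a penalty itself.
    for batch in frame_batches:
        for detection in detect_offences(batch):
            outcome = officer_review(detection)
            print(f"{detection.plate}: {detection.offence.value} -> {outcome}")
```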
Should AI be given power in prosecutions?
The technology deployed by police in Devon and Cornwall is not that dissimilar to the speed and traffic cameras widely used across the UK. But while those cameras are built around a single, narrowly defined offence, the new system uses AI to search general footage for a range of potential offences. While this may help catch more offenders, it also introduces serious challenges regarding surveillance and evidence.
The first consideration is the degree to which evidence gathered by an AI is admissible in court. As things currently stand, evidence generated solely by an AI would rightly be thrown out, but as AI becomes an increasingly important technology in modern society, it grows more likely that AI-generated evidence will eventually be accepted as legitimate. Considering how easily an AI can be manipulated, this runs the risk of false prosecutions arising from flaws or corruption in the AI's design.
The second consideration is the creation of a Big Brother state. Currently, it is largely up to human eyes to identify crime, and while cameras already allow police to monitor remotely, the use of AI removes the human element from the watching itself. This makes it practical to monitor an entire area 24/7, and such constant surveillance puts every individual under a microscope, which could lead to unfair policing. For example, litter that falls out of a pocket by accident could be flagged by the AI as a littering offence, and it would be in the interest of the police to prosecute, as doing so could help secure future funding, improve crime statistics, or demonstrate the strength of the local force.
Overall, integrating AI into policing should be done with care, and just because we are able to advance technology doesn't mean we should push it into every aspect of daily life. In fact, it may become important in the near future to write down a constitution or declaration of digital independence that sets clear boundaries technology is simply not allowed to cross.