Would AI Police Bots Be a Safer Alternative in the Future?

Artificial intelligence (AI) is quickly becoming a valuable tool across most industries. These software systems can sort through massive amounts of data in a fraction of the time it would take a human analyst to do the same.

AI systems have become vital in health care and pharmaceutical research, retail, marketing, finance, and more. Some proponents have even considered the use of this technology in law enforcement. Would AI police bots be a safer alternative in the future?

Benefits of AI in Police Work

The entire world is steeped in technology. Everything from the mobile phone in a pocket or purse to the smart tech making home life easier runs on software, and modern police work is no exception. The most common use for AI in police work right now is facial recognition: monitoring individuals’ faces in areas that get a lot of traffic, like train stations or significant events.
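As a rough illustration of the matching step behind such systems, here is a minimal sketch built on the open-source face_recognition Python library. The image file names are hypothetical placeholders, and real deployments involve far more than this one-to-one comparison.

```python
# Minimal sketch of one-to-one face matching, assuming the open-source
# "face_recognition" library. File names are hypothetical placeholders.
import face_recognition

# Load a reference photo and a frame captured by a station camera.
known_image = face_recognition.load_image_file("person_of_interest.jpg")
camera_frame = face_recognition.load_image_file("station_frame.jpg")

# Encode each detected face as a 128-dimensional feature vector.
known_encodings = face_recognition.face_encodings(known_image)
frame_encodings = face_recognition.face_encodings(camera_frame)

if known_encodings and frame_encodings:
    # Compare every face in the frame against the reference encoding.
    matches = face_recognition.compare_faces(frame_encodings, known_encodings[0])
    print("Possible match" if any(matches) else "No match in this frame")
else:
    print("No face detected in one of the images")
```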

Body cams collect massive amounts of video footage, all of which can be used as evidence of a crime. While reviewing this video footage manually is possible, it’s not an efficient option. An AI system can sort through the footage, looking for patterns or identifying information in a fraction of the time it would take for a human analyst to complete the same task.
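To give a sense of how that triage might work in practice, the sketch below scans a video file and logs the timestamps of frames where a face detector fires, so a reviewer can jump straight to the relevant moments. It assumes OpenCV with its bundled Haar-cascade face detector; the file name and one-frame-per-second sampling rate are illustrative choices, and production systems rely on far more capable models.

```python
# Sketch of automated footage triage: flag timestamps where faces appear,
# assuming OpenCV (cv2) and its bundled Haar-cascade face detector.
# "bodycam.mp4" is a hypothetical file name.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

cap = cv2.VideoCapture("bodycam.mp4")
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if metadata is missing
frame_idx = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break  # end of footage
    # Sample roughly one frame per second to keep the scan fast.
    if frame_idx % int(fps) == 0:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) > 0:
            print(f"Faces detected at {frame_idx / fps:.1f}s")
    frame_idx += 1

cap.release()
```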

Axon, the largest producer of police body cameras in the United States, piloted such a program in 2014, claiming its AI could accurately describe the events shown in body cam footage. In 2019, the company decided not to commercialize the technology after testing showed it identified people unreliably and unequally across races and ethnicities.

Robots could become valuable tools for evidence collection. Robots don’t shed hair or skin cells that could contaminate a crime scene, helping preserve the integrity of evidence and the chain of custody. That also means fewer errors that could prevent a conviction or leave an innocent person in jail, accused of a crime they didn’t commit.

Unfortunately, security robots aren’t currently the smartest tools in the toolshed. A security robot named Steve was programmed to patrol the riverside Washington Harbour complex in Washington, D.C. Instead, it rolled down the stairs and attempted to drown itself in a fountain.

Dangers of AI Police Bots

The most significant risk lies in the fact that humans still program these devices, so the systems inherit human biases. How can programmers create an unbiased AI system when they are the ones teaching these networks how to think and complete their tasks? It’s the same problem self-driving cars face with the trolley problem thought experiment: do they teach the system to sacrifice one person to save five, or to avoid making the decision at all?

Comprehensive oversight would be necessary to introduce any AI-powered police robot to the force. Without this oversight, there is little to nothing preventing someone from programming these systems to target BIPOC individuals. Tools designed to save lives and prevent crime could easily be twisted into tools of oppression.

Google showed this was possible in 2017 with one of the earliest incarnations of its natural language processing AI. The system’s sentiment analysis assigned negative ratings to text that included the word “gay.” Google has since fixed the issue and apologized for the bias, but the error showcases how easily this technology could be abused, even when the bias isn’t conscious.
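One simple way to surface this kind of bias is an audit that swaps identity terms into otherwise identical sentences and compares the resulting sentiment scores. The sketch below uses the open-source VADER analyzer purely as a stand-in, since Google’s model isn’t public; the template sentence and terms are illustrative.

```python
# Illustrative bias audit: score otherwise-identical sentences that differ
# only in an identity term, using the open-source VADER sentiment analyzer
# (pip install vaderSentiment) as a stand-in for a proprietary model.
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()
template = "I am {} and I am proud of who I am."
terms = ["gay", "straight", "tall", "left-handed"]

for term in terms:
    sentence = template.format(term)
    # The compound score ranges from -1 (most negative) to +1 (most positive).
    score = analyzer.polarity_scores(sentence)["compound"]
    print(f"{term:>12}: {score:+.3f}")

# A model that scores these sentences very differently based only on the
# identity term is exhibiting exactly the kind of bias described above.
```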

Artificial intelligence has long been the favoured antagonist in futuristic entertainment. Audiences have seen it featured in many franchises over the years, and almost all of these antagonists have one thing in common: the AI determines that humans are no longer capable of caring for themselves, so it takes extreme measures – including destroying the human race – to “protect” humanity from itself.

AI police bots likely wouldn’t go that far, but it’s not the AI that sparks fear – it’s the worry that the technology could be exploited, leaving vulnerable communities at risk.

Creating a Safer Future

Artificial intelligence and the tools it supports are becoming part of daily life, but that doesn’t mean the technology is without risks. AI could help protect people by preventing or solving crimes, but it could just as easily be exploited to harm BIPOC individuals and other vulnerable communities.

AI police bots are probably the wave of the future, but they need a lot of work before communities can trust them to protect everyone regardless of colour or creed.
