Non-profit builds site to track surging AI mishaps

Interview False images of Donald Trump supported by made-up Black voters, middle-schoolers creating pornographic deepfakes of their female classmates, and Google’s Gemini chatbot failing to generate pictures of White people accurately.

These are some of the latest disasters listed on the AI Incident Database – a website keeping tabs on all the different ways the technology goes wrong.

Initially launched as a project under the auspices of the Partnership on AI, a group that tries to ensure AI benefits society, the AI Incident Database is now a non-profit organization funded by Underwriters Laboratories – the largest and oldest (est. 1894) independent testing laboratory in the United States, which tests all sorts of products, from furniture to computer mice. The database has cataloged more than 600 unique automation and AI-related incidents so far.

“There’s a huge information asymmetry between the makers of AI systems and public consumers – and that’s not fair,” argued Patrick Hall, an assistant professor at the George Washington University School of Business, who is currently serving on the AI Incident Database’s Board of Directors. He told The Register: “We need more transparency, and we feel it’s our job just to share that information.”

The AI Incident Database is modeled on the CVE Program, set up by the non-profit MITRE to catalog publicly disclosed cyber security vulnerabilities, and on the National Highway Traffic Safety Administration’s website reporting vehicle crashes. “Any time there’s a plane crash, train crash, or a big cyber security incident, it’s become common practice over decades to record what happened so we can try to understand what went wrong and then not repeat it,” Hall explained.

The website is currently managed by around ten people, plus a handful of volunteers and contractors who review and post AI-related incidents online. Heather Frase, an AI Incident Database director and a senior fellow at Georgetown’s Center for Security and Emerging Technology focused on AI assessment, claimed that the website is unique in that it focuses on real-world impacts from the risks and harms of AI – not just vulnerabilities and bugs in software.

The organization currently collects incidents from media coverage and reviews issues reported by people on Twitter. The AI Incident Database had logged 250 unique incidents before the release of ChatGPT in November 2022; it now lists more than 600.

Monitoring problems with AI over time reveals interesting trends, and could allow people to understand the technology’s real, current harms.

George Washington University’s Hall revealed that roughly half of the reports in the database are related to generative AI. Some of them are “funny, silly things” like dodgy products sold on Amazon titled: “I cannot fulfill that request” – a clear sign that the seller used a large language model to write descriptions – or other instances of AI-generated spam. But some are “really kind of depressing and serious” – like a Cruise robotaxi running over and dragging a woman under its wheels in an accident in San Francisco.

“AI is mostly a wild west right now, and the attitude is to go fast and break things,” he lamented. It’s not clear how the technology is shaping society, and the team hopes the AI Incident Database can provide insights into the ways it’s being misused and highlight unintended consequences – in the hope that developers and policymakers are better informed so they can improve their models or regulate the most pressing risks.

“There’s a lot of hype around. People talk about existential risk. I’m sure that AI can pose very severe risks to human civilization, but it’s clear to me that some of these more real-world risks – like lots of injuries associated with self-driving cars or, you know, perpetuating bias through algorithms that are used in consumer finance or employment – that’s what we see.”

“I know we’re missing a lot, right? Not everything is getting reported or captured by the media. A lot of times people may not even realize that the harm they are experiencing is coming from an AI,” Frase observed. “I expect physical harm to go up a lot. We’re seeing [mostly] psychological harms and other intangible harms happening from large language models – but once we have generative robotics, I think physical harm will go up a lot.”

Frase is most concerned about the ways AI could erode human rights and civil liberties. She believes that collecting AI incidents will show if policies have made the technology safer over time.

“You have to measure things to fix things,” Hall added.

The organization is always looking for volunteers and is currently focused on capturing more incidents and increasing awareness. Frase stressed that the group’s members are not AI Luddites: “We’re probably coming off as fairly anti-AI, but we’re not. We actually want to use it. We just want the good stuff.”

Hall agreed. “To sort of keep the technology moving forward, somebody just has to do the work to make it safer,” he said. ®
