If AI chatbots are sentient, they can be squirrels, too

In Brief No, AI chatbots are not sentient.

No sooner had the story gone viral about a Google engineer, who blew the whistle on what he claimed was a sentient language model, than multiple publications stepped in to say he's wrong.

The debate over whether the company's LaMDA chatbot is conscious, or has a soul, isn't a very good one, if only because it's too easy to shut down the side that believes it does. Like most large language models, LaMDA has billions of parameters and was trained on text scraped from the internet. The model learns the relationships between words, and which ones are more likely to appear next to each other.

It can appear intelligent, and sometimes answers questions correctly. But it knows nothing about what it's saying, and has no real understanding of language or anything else. Language models aren't deterministic, either: they sample their output from a probability distribution. Ask LaMDA if it has feelings and it might say yes or no. Ask if it's a squirrel, and it might say yes or no, too. Is it possible AI chatbots might actually be squirrels?
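To see why, here's a toy sketch of how a language model answers a question: it samples from a probability distribution over possible next words. The numbers below are made up for illustration and have nothing to do with LaMDA's actual weights.

```python
import random

# Toy next-token probabilities (invented for demonstration, not LaMDA's).
next_token_probs = {
    "Are you sentient?": {"Yes": 0.55, "No": 0.45},
    "Are you a squirrel?": {"Yes": 0.40, "No": 0.60},
}

def sample_answer(prompt: str) -> str:
    """Sample one answer according to the model's (toy) probabilities."""
    probs = next_token_probs[prompt]
    return random.choices(list(probs), weights=probs.values())[0]

# Repeated sampling shows why the same prompt can yield contradictory answers.
for prompt in next_token_probs:
    answers = [sample_answer(prompt) for _ in range(5)]
    print(prompt, "->", answers)
```

Run it a few times and the same question gets different answers, which is all a yes to "are you sentient?" really tells you.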

FTC raises alarm on using AI for content moderation

AI is changing the internet. Realistic-looking photos are used as profile pictures for fake accounts on social media, pornographic deepfake videos of women circulate, and images and text generated by algorithms are posted online.

Experts have warned these capabilities can increase the risks of fraud, bots, misinformation, harassment, and manipulation. Platforms are increasingly turning to AI algorithms to automatically detect and remove bad content.
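Stripped to its essentials, automated moderation usually boils down to scoring content with a classifier and acting on a threshold. The sketch below uses invented scores and an invented threshold, purely to illustrate the mechanism:

```python
# Hedged sketch of threshold-based moderation. The harm scores here are
# hard-coded stand-ins; a real platform would get them from a trained model.
posts = [
    ("You are wonderful", 0.02),
    ("Buy followers now!!!", 0.91),
    ("That take is garbage", 0.55),  # borderline: criticism, sarcasm, or abuse?
]

THRESHOLD = 0.8  # lower it and false positives rise; raise it and abuse slips through

for text, harm_score in posts:
    action = "remove" if harm_score >= THRESHOLD else "keep"
    print(f"{action}: {text!r} (score={harm_score})")
```

The borderline case is the whole problem: wherever the threshold sits, something legitimate gets removed or something harmful stays up.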

Now, the FTC is warning that these methods could make problems worse. “Our report emphasizes that nobody should treat AI as the solution to the spread of harmful online content,” Samuel Levine, director of the FTC’s Bureau of Consumer Protection, said in a statement.

Unfortunately, the technology can be “inaccurate, biased, and discriminatory by design”. “Combatting online harm requires a broad societal effort, not an overly optimistic belief that new technology — which can be both helpful and dangerous — will take these problems off our hands,” Levine said.

Spotify snaps up deepfake voice startup

Audio streaming giant Spotify has acquired Sonantic, a London-based upstart focused on building AI software capable of generating completely made-up voices.

Sonantic's technology has been used in gaming and in Hollywood movies, helping give the actor Val Kilmer a voice in Top Gun: Maverick. Kilmer played Iceman in the action movie; his lines were uttered by a machine because he has difficulty speaking after battling throat cancer.

Now, the same technology looks to be making its way to Spotify, too. The obvious application would be using the AI voices to read audiobooks. Spotify, after all, acquired Findaway, an audiobook platform, in November last year. It will be interesting to see whether listeners will be able to customise what their machine narrators sound like. Maybe there will be different voices for reading kids' books than for horror stories.
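If that sort of customisation does arrive, it might take the form of per-genre voice presets. The sketch below is pure speculation: the class, presets, and parameters are hypothetical and come from neither Spotify nor Sonantic.

```python
from dataclasses import dataclass

# Entirely speculative listener-facing narrator presets; no such API has
# been announced. Every name and number here is invented.
@dataclass
class NarratorVoice:
    name: str
    pitch: float   # relative pitch shift
    pace: float    # speaking-rate multiplier
    style: str     # e.g. "warm", "suspenseful"

VOICES = {
    "kids": NarratorVoice("Sunny", pitch=1.2, pace=0.9, style="warm"),
    "horror": NarratorVoice("Raven", pitch=0.8, pace=0.8, style="suspenseful"),
}

def narrator_for(genre: str) -> NarratorVoice:
    """Pick a synthetic narrator preset for a book genre (hypothetical)."""
    return VOICES.get(genre, NarratorVoice("Default", 1.0, 1.0, "neutral"))

print(narrator_for("horror"))
```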

“We’re really excited about the potential to bring Sonantic’s AI voice technology onto the Spotify platform and create new experiences for our users,” Ziad Sultan, Spotify’s vice president of personalization, said in a statement. “This integration will enable us to engage users in a new and even more personalized way,” he hinted.

TSA testing AI software to automatically scan luggage

The US Transportation Security Administration will be testing whether computer vision software can automatically screen luggage for items that look suspicious or are not allowed on flights.

The trial will take place in a lab and isn't yet ready for real airports. The software works with the existing 3D Computed Tomography (CT) imaging that TSA officers currently use to peek through people's bags at security checkpoints. If agents see something suspicious-looking, they'll take the luggage to one side and rifle through it.

AI algorithms can automate some of that process; they can identify objects and flag instances where they detect particular items.
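In outline, that flagging step could look like the sketch below; the item classes, confidence scores, and threshold are stand-ins rather than details of TSA's or Pangiam's actual system.

```python
# Hedged sketch of the flagging logic described above. The prohibited-item
# list and the detector output are invented for illustration.
PROHIBITED = {"knife", "firearm", "aerosol"}

def flag_bag(detections, min_confidence=0.7):
    """Return prohibited items found by a (hypothetical) CT object detector.

    `detections` is the detector's output as (label, confidence) pairs.
    A non-empty result means the bag gets pulled aside for a manual search.
    """
    return [(label, conf) for label, conf in detections
            if label in PROHIBITED and conf >= min_confidence]

# Made-up detector output for one scanned bag:
scan = [("laptop", 0.98), ("knife", 0.83), ("shoe", 0.91)]
print(flag_bag(scan))  # [('knife', 0.83)] -> route bag to an officer
```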

“As TSA and other security agencies adopt CT, this application of AI represents a potentially transformative leap in aviation security, making air travel safer and more consistent, while allowing TSA’s highly trained officers to focus on bags that pose the greatest risk,” said Alexis Long, product director at Pangiam, the technology company working with the administration.

“Our aim is to utilize AI and computer vision technologies to enhance security by providing TSA and security officers with powerful tools to detect prohibited items that may pose a threat to aviation security. It is a significant step toward setting a new security standard with worldwide implications.” ®
