Weird robot breaks down in middle of House of Lords hearing on AI art

In brief A freaky-looking humanoid robot wearing dungarees and named Ai-Da became the first machine to speak at a House of Lords committee hearing on AI art this week.

Gallery director Aiden Meller created Ai-Da and claimed it had “a combined collaborative persona” made up of “many algorithms … very different algorithms for very different outcomes,” including code for drawing, painting, speaking, and writing, The Guardian reported. Questions for the machine were prepared ahead of the hearing, so Ai-Da’s answers were scripted.

If Ai-Da’s art skills are anything like her speaking performance at the hearing, they are probably quite atrocious. The robot took a long time to answer questions, and at one point even shut down randomly. Meller put a pair of sunglasses on her face while he rebooted her, to stop onlookers staring at the “quite interesting faces” she supposedly pulls while being reset.

You can watch the proceedings in the video below…

[Youtube video]

Still, members of the House of Lords appeared to take Ai-Da at face value. Baroness Stowell, chair of the committee, called the hearing “a serious inquiry.” Baroness Bull also grilled Ai-Da on how she made art, and how it differed from human creations. 

Use AI to fill that PowerPoint with stuff

Microsoft this week, at its Ignite conference, teased a couple of developments powered by OpenAI’s DALL∙E 2 model that could one day let people design documents and fill them with stock art and illustrations generated by artificial intelligence.

The first is Microsoft 365’s Designer, which can produce PowerPoint-ish-looking documents from text prompts. Just type what you want to see, and the model will generate it for you. Access to this cloud-hosted tool is, for now, granted by Microsoft on request. In addition, a version of Designer will be integrated into the Edge browser at some point.

Text-to-image models are trained on large swathes of the internet, and can create inappropriate NSFW, biased, or toxic content, as well as stuff that looks pretty decent. “It’s important, with early technologies like DALL∙E 2, to acknowledge that this is new and we expect it to continue to evolve and improve,” Microsoft noted in its announcement.

“To help prevent DALL∙E 2 from delivering inappropriate results across the Designer app and Image Creator, we are working together with our partner OpenAI, who developed DALL∙E 2, to take the necessary steps and will continue to evolve our approach.”

Text prompts are screened, and certain words will block the model from generating any image.
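For illustration only, here is a minimal sketch of the kind of prompt screening described above, assuming a simple keyword blocklist. Microsoft and OpenAI have not published their actual filtering code; the function name and blocked terms below are invented.

# Hypothetical example of keyword-based prompt screening.
# Not Microsoft's or OpenAI's real filter; terms are placeholders.
BLOCKED_TERMS = {"gore", "nsfw", "violence"}

def is_prompt_allowed(prompt: str) -> bool:
    # Split the prompt into lowercase words, stripping basic punctuation,
    # and reject it if any word appears in the blocklist.
    words = {w.strip(".,!?").lower() for w in prompt.split()}
    return BLOCKED_TERMS.isdisjoint(words)

if __name__ == "__main__":
    for p in ["a watercolour of a lighthouse", "a scene full of gore"]:
        print(p, "->", "allowed" if is_prompt_allowed(p) else "blocked")

Real-world systems are more elaborate, but the principle reported here is the same: certain words in the prompt stop the model from generating any image at all.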

AI brings Steve Jobs back to life in a fake podcast

Generative AI technology can create realistic images, text, and audio – well enough to produce a completely fake podcast episode featuring what sounds like Joe Rogan and the late Steve Jobs.

Everything in this audio-only podcast episode is made up, from the voices to the questions posed by Rogan and answered by Jobs. The results are frankly bizarre, and sometimes hilarious. It starts with Rogan’s cloned voice announcing: “Welcome to the bro Jogan experience” before a fake Jobs is introduced.

Jobs talks about computing, spirituality, and health. It sounds convincing at first, but his voice grows wooden and the conversation feels disjointed at times. The episode was made by Play.ht, a London-based company that sells software to generate text-to-speech voices.

It will be interesting to see whether this takes off in the podcasting world. Is there any commercial value in listening to made-up content from dead celebrities? Is the content actually enjoyable or useful in any way? You can listen to the episode here. ®
