OpenAI's Sam Altman can't go on dinner dates anymore

Spare a thought for poor Sam Altman. Amid the ongoing drama that is OpenAI, with high-profile departures and regulators worrying about safety, the CEO is complaining that his fame put an end to impromptu dining in his own city.

Excuse us while we break out the world’s smallest violin.

It has been a difficult period for Altman. In the week after his interview on The Logan Bartlett Show was published, OpenAI co-founder Ilya Sutskever announced he was leaving the company. Jan Leike, who was co-leading OpenAI’s Superalignment team, also resigned.

International Monetary Fund managing director Dr Kristalina Georgieva had earlier warned of the “tsunami” hitting the global labor market as businesses adopt AI technologies.

On May 17, Leike posted a lengthy thread on X (formerly Twitter) explaining his departure.

After praising his team and the talent within OpenAI, Leike said: “I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point.

“I believe much more of our bandwidth should be spent getting ready for the next generations of models, on security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact, and related topics.

“Building smarter-than-human machines is an inherently dangerous endeavor … But over the past years, safety culture and processes have taken a backseat to shiny products.”

Leike’s comments prompted an equally lengthy response from OpenAI co-founder Greg Brockman, who took full advantage of last year’s increase in tweet length to post a hefty statement on how OpenAI is taking the question of safety seriously.

In between extolling the virtues of OpenAI’s product and processes, Brockman said: “We have been putting in place the foundations needed for safe deployment of increasingly capable systems.

“We know we can’t imagine every possible future scenario. So we need to have a very tight feedback loop, rigorous testing, careful consideration at every step, world-class security, and harmony of safety and capabilities. We will keep doing safety research targeting different timescales.

“We are also continuing to collaborate with governments and many stakeholders on safety.”

As well OpenAI might. On May 20, UK Technology Secretary Michelle Donelan announced the UK AI Safety Institute’s first overseas office in San Francisco. The institute also published findings after assessing AI models against four risk areas, “including how effective the safeguards that developers have installed actually are in practice.”

The answer was: not so much. While the models put through their paces struggled to complete complicated problems without human oversight, “all tested models remain highly vulnerable to basic ‘jailbreaks,’ and some will produce harmful outputs even without dedicated attempts to circumvent safeguards.”

Seven days is a long time in AI technology. A week ago, Altman was on a podcast talking about GPT-4o and commenting on his loss of anonymity. Now the company is dealing with the fallout from resignations and scrutiny over its safety culture.

Perhaps we need more than one tiny violin. ®
