
xAI’s promised safety report is MIA


Elon Musk’s AI company, xAI, has missed a self-imposed deadline to publish a finalized AI safety framework, as noted by watchdog group The Midas Project.

xAI isn’t exactly known for its strong commitments to AI safety as it’s commonly understood. A recent report found that the company’s AI chatbot, Grok, would undress photos of women when asked. Grok can also be considerably more crass than chatbots like Gemini and ChatGPT, cursing without much restraint to speak of.

Nonetheless, in February at the AI Seoul Summit, a global gathering of AI leaders and stakeholders, xAI published a draft framework outlining the company’s approach to AI safety. The eight-page document laid out xAI’s safety priorities and philosophy, including the company’s benchmarking protocols and AI model deployment considerations.

As The Midas Project noted in a blog post on Tuesday, however, the draft only applied to unspecified future AI models “not currently in development.” Moreover, it failed to articulate how xAI would identify and implement risk mitigations, a core component of a document the company signed at the AI Seoul Summit.

In the draft, xAI said that it planned to release a revised version of its safety policy “within three months” — by May 10. The deadline came and went without acknowledgement on xAI’s official channels.

Despite Musk’s frequent warnings about the dangers of unchecked AI, xAI has a poor AI safety track record. A recent study by SaferAI, a nonprofit aiming to improve the accountability of AI labs, found that xAI ranks poorly among its peers, owing to its “very weak” risk management practices.

That’s not to suggest other AI labs are faring dramatically better. In recent months, xAI rivals including Google and OpenAI have rushed safety testing and have been slow to publish model safety reports (or skipped publishing reports altogether). Some experts have expressed concern that the seeming deprioritization of safety efforts is coming at a time when AI is more capable — and thus potentially dangerous — than ever.
