Scrutiny Intensifies Over Google’s AI Safety Commitments Following Gemini Disclosure

Published: 2025-04-17 22:43:32

Google’s AI safety promises under scrutiny after Gemini report

The sparse technical report accompanying Gemini 2.5 Pro has prompted heightened examination of Google’s assurances regarding artificial intelligence safety protocols. Industry analysts and policy researchers are now questioning whether the tech giant’s ethical AI frameworks and operational safeguards match its public pledges. The disclosure has sparked debate across regulatory circles and academic communities about transparency, accountability, and responsible development practices for large language models and other advanced systems.

Google handles safety reporting differently from rival labs.

Google releases a report only after a model is no longer tagged “experimental,” and it moves certain “dangerous capability” findings into a separate audit that is not published at once. As a result, the public paper does not cover every threat Google has tested for.

Several analysts said the new Gemini 2.5 Pro document is a stark case of limited disclosure. They also noted that the report never refers to Google’s Frontier Safety Framework, or FSF, a policy the company announced last year to identify future AI capabilities that could cause “severe harm.”

“This report is very sparse, contains minimal information, and arrived weeks after the model went public,” said Peter Wildeford, co‑founder of the Institute for AI Policy and Strategy. “It is impossible to confirm whether Google is meeting its own promises, and therefore impossible to judge the safety and security of its models.”

Thomas Woodside, co-founder of the Secure AI Project, said he was glad any paper had appeared at all, yet he was skeptical that Google will deliver follow-up evaluations on a regular schedule. He pointed out that the last time the firm shared results from dangerous-capability tests was June 2024, and that paper covered a model announced in February of the same year.

Confidence slipped further when observers saw no safety paper for Gemini 2.5 Flash, a slimmer and faster model Google revealed last week. A company spokesperson said a Flash paper is “coming soon.”

“I hope this is a real promise to start giving more frequent updates,” Woodside said. “Those updates should include results for models that have not yet reached the public, because those models may also pose serious risks.”

Google now falls short on transparency

Google is not the only lab trimming its disclosures: Meta’s safety note for its new Llama 4 models runs only a few pages, while OpenAI chose not to publish any report at all for its GPT-4.1 series.

The shortage of detail comes at a tense time. Two years ago, Google told the U.S. government it would post safety papers for every “significant” AI model within scope. The company made similar pledges to officials in other countries, saying it would offer “public transparency” about its AI products.

Kevin Bankston, senior adviser on AI governance at the Center for Democracy and Technology, called the releases from leading labs a “race to the bottom” on safety.

“Combined with reports that rival labs like OpenAI have cut safety‑testing time before release from months to days, this meager documentation for Google’s top model tells a troubling story of a race to the bottom on AI safety and transparency as companies rush their models to market,” he added.

Google says much of its safety work happens behind closed doors. The company states that every model undergoes strict tests, including “adversarial red teaming,” before any public launch.
