Google Upgrades SynthID to Sniff Out AI Fakes by Sight, Sound, and Script

One Tool, Three Mediums

Google DeepMind has significantly expanded its AI detection capabilities with the new version of SynthID. Originally released in 2023 as a tool to watermark AI-generated images from its Imagen model, SynthID can now detect synthetic media across images, audio, and text. The update is designed to help content creators, platforms, and fact-checkers determine whether content was generated by AI, a critical step as misinformation and deepfakes become harder to spot and more convincing. The latest iteration introduces detection for text generated by models such as Gemini and other large language models (LLMs), broadening SynthID's reach beyond visuals to written and spoken content.

Transparency Meets Security

DeepMind’s goal with the SynthID upgrade is to strike a balance between identifying AI content and preserving privacy and security. The tool works without requiring internet access or sending user data to the cloud, a key advantage for enterprise and government applications. Rather than relying on visible labels or metadata that can easily be stripped, SynthID embeds imperceptible signals within the output itself, designed to survive compression and modification. While Google acknowledges that no detection system is flawless, SynthID represents a major step toward scalable, responsible AI deployment amid a global surge in generative AI usage.
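For text, this kind of embedded signal can be pictured as a keyed statistical bias over token choices. The sketch below is a toy illustration of that general idea only, not Google's actual algorithm: a secret key plus the previous token deterministically assigns roughly half the vocabulary to a "green list", generation prefers green tokens, and detection measures the green-token fraction, which sits near 0.5 for ordinary text but runs much higher for watermarked output. All names here (`KEY`, `is_green`, and so on) are hypothetical.

```python
import hashlib

KEY = b"demo-secret"  # hypothetical shared secret between embedder and detector

def is_green(prev_token: str, token: str) -> bool:
    """Deterministically assign ~half the vocabulary to a 'green list',
    keyed on the secret and the previous token (the local context)."""
    digest = hashlib.sha256(KEY + prev_token.encode() + token.encode()).digest()
    return digest[0] % 2 == 0

def watermark_choice(prev_token: str, candidates: list[str]) -> str:
    """At generation time, prefer a green candidate; fall back if none exist."""
    for tok in candidates:
        if is_green(prev_token, tok):
            return tok
    return candidates[0]

def green_fraction(tokens: list[str]) -> float:
    """Detection score: the fraction of consecutive pairs whose second token
    is green. Near 0.5 for unmarked text, much higher for watermarked text."""
    hits = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)
```

Because the signal is statistical rather than a single fragile tag, it degrades gracefully: editing a few tokens lowers the detection score slightly instead of erasing the watermark outright, which is the property the article describes as surviving compression and modification.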

BytesWall

BytesWall brings you smart, byte-sized updates and deep industry insights on AI, automation, tech, and innovation — built for today's tech-driven world.