Google is making SynthID Text, its technology that lets developers watermark and detect text written by generative AI models, generally available.
SynthID Text can be downloaded from the AI platform Hugging Face and Google’s updated Responsible GenAI Toolkit.
“We’re open-sourcing our SynthID Text watermarking tool,” the company wrote in a post on X. “Available freely to developers and businesses, it will help them identify their AI-generated content.”
So how does it work?
Given a prompt like “What’s your favorite fruit?,” a text-generating model predicts which “token” most likely follows another, one token at a time. Tokens, each a single character, word, or word fragment, are the building blocks a generative model uses to process information. The model assigns each candidate token a score: the probability that it appears next in the output. SynthID Text embeds additional information in this token distribution by “modulating the likelihood of tokens being generated,” Google says.
“The final pattern of scores for both the model’s word choices combined with the adjusted probability scores are considered the watermark,” the company wrote in a blog post. “This pattern of scores is compared with the expected pattern of scores for watermarked and unwatermarked text, helping SynthID detect if an AI tool generated the text or if it might come from other sources.”
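Google hasn’t published its exact scheme in this announcement, but the general idea of watermarking by nudging token probabilities can be illustrated with a toy sketch. What follows is not SynthID’s algorithm: it’s a simplified, hypothetical “green list” approach, in which a secret key plus the preceding token pseudorandomly selects a subset of the vocabulary that the generator favors, and the detector then scores how often tokens land in that keyed subset.

```python
import hashlib
import random

def green_list(prev_token: str, vocab: list[str], key: str,
               frac: float = 0.25) -> set[str]:
    # Seed a PRNG from the secret key plus the previous token, then
    # pseudorandomly pick a "green" subset of the vocabulary. The same
    # (key, prev_token) pair always yields the same subset.
    seed = int.from_bytes(
        hashlib.sha256((key + "|" + prev_token).encode()).digest(), "big")
    rng = random.Random(seed)
    return set(rng.sample(vocab, max(1, int(len(vocab) * frac))))

def detect_score(tokens: list[str], vocab: list[str], key: str) -> float:
    # Fraction of tokens that fall in the green list keyed by their
    # predecessor. Text generated with a bias toward green tokens scores
    # well above chance level (~frac); other text hovers near it.
    pairs = list(zip(tokens, tokens[1:]))
    hits = sum(tok in green_list(prev, vocab, key) for prev, tok in pairs)
    return hits / len(pairs)
```

A real system only biases the sampled distribution rather than forcing watermark-friendly tokens, which is how quality can be preserved, and detection relies on a statistical test over the whole text rather than a hard per-token rule.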
Google claims that SynthID Text, which has been integrated with its Gemini models since this spring, doesn’t compromise the quality, accuracy, or speed of text generation, and works even on text that’s been cropped, paraphrased, or modified.
But the company also admits that its watermarking approach has limitations.
For example, SynthID Text is less reliable on short text, text that’s been rewritten or translated into another language, and responses to factual questions. “On responses to factual prompts, there are fewer opportunities to adjust the token distribution without affecting the factual accuracy,” explains the company. “This includes prompts like ‘What is the capital of France?,’ or queries where little or no variation is expected, like ‘recite a William Wordsworth poem.’”
Google isn’t the only company working on AI text watermarking tech. OpenAI has for years researched watermarking methods, but delayed their release over technical and commercial considerations.
Watermarking techniques, if widely adopted, could help turn the tide on inaccurate — but increasingly popular — “AI detectors” that falsely flag essays written in a more generic voice. But the question is, will they be widely adopted — and will one standard or tech win out over others?
There may soon be legal mechanisms that force developers’ hands. China’s government has introduced mandatory watermarking of AI-generated content, and the state of California is looking to do the same.
There’s urgency to the situation. According to a report by Europol, the European Union’s law enforcement agency, as much as 90% of online content could be synthetically generated by 2026, creating new law enforcement challenges around disinformation, propaganda, fraud, and deception. Already, nearly 60% of all sentences on the web could be AI-generated, per one study, thanks to the widespread use of AI translators.