
Google DeepMind unveils open-source tool for identifying AI-generated text

Written by Tech Desk · 1 min read

Google DeepMind has released SynthID, a tool designed to identify AI-written text, as an open-source project. It joins a number of watermarking tools currently under development to make generative AI outputs more transparent. The release builds on earlier work: DeepMind introduced a SynthID watermark for images last year and later extended it to AI-generated video.

SynthID has been integrated into Google’s Gemini application and online chatbots, and is now accessible through Hugging Face, a platform that hosts AI models and datasets. Watermarks help users distinguish AI-generated content from human-written text, which plays a vital role in combating misinformation.

SynthID Features

SynthID embeds an invisible watermark into AI-generated text by subtly adjusting the probability with which tokens are sampled during generation. The adjustment is imperceptible to readers but leaves a statistical signature that a detector can later test for. According to DeepMind’s evaluations, the watermark does not compromise the quality, accuracy, creativity, or speed of the generated text.
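The general idea of probability-based text watermarking can be illustrated with a simplified sketch. Note that this is not SynthID's actual algorithm (DeepMind uses a more sophisticated sampling scheme); it is a minimal, hypothetical "green list" variant: a seeded hash of the previous token partitions the vocabulary, generation is biased toward the "green" half, and a detector later measures how far the observed green-token fraction deviates from chance.

```python
import hashlib
import math
import random

# Toy vocabulary standing in for a real model's token set (hypothetical).
VOCAB = [f"tok{i}" for i in range(1000)]

def green_set(prev_token: str, fraction: float = 0.5) -> set:
    # Seed a PRNG from the previous token so the same "green" partition
    # can be recomputed at detection time without storing any state.
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16) % (2**32)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * fraction)))

def watermarked_choice(prev_token: str, rng: random.Random, bias: float = 0.9) -> str:
    # Shift sampling probability toward green tokens; a real system would
    # bias the model's logits rather than choose uniformly.
    if rng.random() < bias:
        return rng.choice(sorted(green_set(prev_token)))
    return rng.choice(VOCAB)

def detect(tokens: list[str]) -> float:
    # z-score of the observed green fraction vs. the 50% expected by chance;
    # large positive values indicate a watermark is likely present.
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:]) if tok in green_set(prev))
    n = len(tokens) - 1
    return (hits - 0.5 * n) / math.sqrt(0.25 * n)
```

In this sketch, a watermarked sequence of a few hundred tokens scores a z-value far above any plausible chance level, while ordinary text hovers near zero. This also hints at why heavy editing weakens detection: each rewritten token breaks the statistical link to its predecessor.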

Additionally, SynthID is available on Hugging Face under an open-source license, so developers can integrate it into their own models, and the community can test the watermarking method’s robustness and efficacy. Currently, SynthID is designed to work with Google’s AI models, with hopes for future compatibility across other platforms.

SynthID Limitations

However, the watermark does have limitations. It becomes less reliable when the text is heavily modified, for example through rewriting or translation. It is also weaker on responses to factual questions, where the model has little flexibility in word choice and thus little room to encode the watermark without harming accuracy.

While SynthID is a step forward in AI transparency, experts emphasize that watermarking should be viewed as one component of a broader strategy for ensuring safe AI usage. The development of additional safeguards will be essential as the technology continues to evolve.