Google has taken a major step in the fight against artificial intelligence (AI) abuse and disinformation with the introduction of SynthID Detector. This innovative tool, presented by Google DeepMind, is designed to scan images, audio, video and text for invisible watermarks embedded by Google’s growing suite of AI models.
SynthID Detector aims to provide greater transparency by identifying AI-generated content from Google, including audio produced with NotebookLM, music from the Lyria model, and images from the Imagen generation system. The tool can flag the specific portions of a piece of content that are most likely to contain a watermark, making it easier for users to trace the provenance of media. Just imagine: how useful would it be if you could verify with a single click whether an image is authentic or AI-generated?
But how does it actually work? During text generation, SynthID subtly adjusts the probability that particular words are chosen, steering the model toward a set of preferred words while maintaining the overall quality and usability of the text. If a passage contains noticeably more of these preferred words than chance would predict, the detector can conclude that it is watermarked. This statistical approach ensures that the invisible watermark does not affect the meaning or readability of the content: it works like an invisible imprint that attests that the text came from one of Google's models.
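To make the idea concrete, here is a minimal sketch in Python of probability-based text watermarking. This is not Google's actual SynthID algorithm (DeepMind's published method, known as tournament sampling, is considerably more sophisticated), and all names, keys and parameters below are illustrative assumptions: a secret key deterministically marks roughly half the vocabulary as "preferred", generation nudges word choices toward that set, and detection checks whether a text contains statistically more preferred words than chance alone would predict.

```python
import hashlib
import math
import random

SECRET_KEY = "demo-key"  # hypothetical key; a real system would keep this secret

def is_preferred(word: str, key: str = SECRET_KEY) -> bool:
    """Deterministically assign roughly half the vocabulary to a 'preferred' set."""
    digest = hashlib.sha256((key + word).encode()).digest()
    return digest[0] % 2 == 0

def generate(candidates_per_step, bias=4.0):
    """Pick one word per step, up-weighting preferred words.

    candidates_per_step stands in for the model's per-step candidate words
    and scores; a real LLM would supply token probabilities instead.
    """
    text = []
    for candidates in candidates_per_step:
        words = [w for w, _ in candidates]
        weights = [score * (bias if is_preferred(w) else 1.0)
                   for w, score in candidates]
        text.append(random.choices(words, weights=weights)[0])
    return text

def detect(words):
    """z-score for 'contains more preferred words than chance (50%) predicts'."""
    n = len(words)
    hits = sum(is_preferred(w) for w in words)
    z = (hits - 0.5 * n) / math.sqrt(0.25 * n)
    return hits / n, z

# Usage: a synthetic vocabulary and 200 generation steps of 5 candidates each.
vocab = [f"word{i}" for i in range(100)]
steps = [[(w, 1.0) for w in random.sample(vocab, 5)] for _ in range(200)]
watermarked = generate(steps)
frac, z = detect(watermarked)
print(f"preferred fraction = {frac:.2f}, z = {z:.1f}")  # z well above 2 flags a watermark
```

On unwatermarked text, the preferred-word fraction hovers around 50% and the z-score stays near zero, which is why the bias is invisible to readers yet detectable by anyone holding the key.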
With generative AI tools increasingly available, it is becoming ever harder for teachers to determine whether student work is original. A recent New York Magazine report highlighted the problem: a technology ethics professor discovered that a student had used ChatGPT to write a personal reflection paper. The episode shows how AI is challenging authorship and authenticity in education.
Concerns about AI-driven cheating continue to grow. Recently, several professors discovered that students were using AI tools for introductory essays and other coursework. This raises the question: how do we safeguard academic integrity in an age when these technologies are so accessible?
AI detection software is widely available, but not without doubts about its reliability. OpenAI, which once offered its own detection tool, shut it down in 2023 due to low accuracy. Can you imagine a tool designed to identify AI-written text falling short at that very task? This is a challenge not only for students, but also for the educators and institutions trying to keep up with the technology.
Another much-discussed tool, Cluely, has recently come into the spotlight. Developed by former Columbia University student Roy Lee, it’s designed to evade AI detection software. “The idea that you can have a kind of dual player for your computer is fascinating,” Lee says. But is that really the direction we want to go?
Despite the promise of SynthID and similar technologies, many current AI detection methods still struggle with reliability. In a recent test by Decrypt, only two of four leading AI detectors (Grammarly, Quillbot, GPTZero, and ZeroGPT) correctly identified the US Declaration of Independence as human-written.
In this dynamic world of technology and creativity, the question is not only how we adopt AI, but also how we manage the risks that come with it. Above all, it remains fascinating to see what the future holds for AI in our daily lives, and that is something worth thinking about.
What is SynthID Detector?
SynthID Detector is a tool from Google that scans images, audio, video and text for invisible watermarks in order to identify AI-generated content.
How does watermarking in text work?
The technology adjusts the probability of word choices during text generation, embedding an invisible watermark that does not affect meaning or readability.
What are the consequences for education?
Educational institutions are facing challenges in identifying original student work as more students turn to AI tools to complete their assignments.