Watermarks often come in the form of text or a logo added to an image. Google's process instead revolves around invisible watermarks, which we've seen used to help identify digital art and its creators. Google cautions, however, that the tool isn't "foolproof" against "extreme image manipulation." Another limitation is that it relies on Google's own Imagen image generator in order to spot the watermarks hidden within the pixels.
DeepMind's detector can see the invisible watermark because it was designed alongside it, letting it recognize when an image was created with an AI tool like Imagen. Similar systems could be built into other image generators, like Midjourney, so their output could also be checked for invisible watermarks.
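To see why the embedder and detector have to be built as a pair, here is a deliberately simplified sketch of pixel-level watermarking. This is a toy least-significant-bit scheme, not Google's actual method (DeepMind's watermark is far more sophisticated and designed to survive edits); the signature bytes and function names here are made up for illustration.

```python
import numpy as np

# Toy 16-bit signature hidden in the pixels (purely illustrative).
WATERMARK = np.unpackbits(np.frombuffer(b"AI", dtype=np.uint8))

def embed(pixels: np.ndarray) -> np.ndarray:
    """Hide the signature in the least-significant bits of the first pixels.

    Changing only the lowest bit shifts each pixel value by at most 1,
    which is why the mark is invisible to the human eye.
    """
    out = pixels.copy().ravel()
    out[: WATERMARK.size] = (out[: WATERMARK.size] & 0xFE) | WATERMARK
    return out.reshape(pixels.shape)

def detect(pixels: np.ndarray) -> bool:
    """The paired detector: read the same bit positions and compare."""
    bits = pixels.ravel()[: WATERMARK.size] & 1
    return bool(np.array_equal(bits, WATERMARK))

img = np.random.default_rng(0).integers(0, 256, (8, 8), dtype=np.uint8)
print(detect(embed(img)))  # True: the detector knows where to look
```

The key point the toy makes: a detector that doesn't know the embedding scheme sees only ordinary pixel values, which is why third-party tools can't spot the mark unless a matching detector is built in. (Unlike this sketch, a production watermark must also survive cropping, compression, and re-encoding.)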
By making the watermark invisible, Google makes it much harder to simply crop or edit out, since it can't be seen except by a detection system like DeepMind's. That makes AI-generated images much easier to sort through and flag. Considering all the copyright rules surrounding generative AI right now, like