Cindy Han ’22 Is Working to Protect Artists From Theft by AI
Han’s start-up, Trufo, is building cryptographic watermarks that can’t be separated from images and videos
When she started as an engineer at Netflix, Cindy Han ’22 saw generative artificial intelligence projects, not yet publicly available, that were “beyond my expectations of what we could do at the moment,” she said. Rather than being overwhelmed with excitement, she responded with concern that the technology was too realistic.
“We’re really close to the day where this is going to become a problem, and I didn’t see any of the existing big tech companies or other startups really addressing it in a way that I think will work in the long run,” she said.
When she didn’t find anyone working on a solution, she and Bill Huang ’19 created one: Trufo, a start-up that builds cryptographic watermarks to defend against the misappropriation of images and videos fueled by AI.
“Our goal is to help people distinguish what is real and what is fake in a meaningful way in the world of generative AI,” said Han, who is CEO of the company. Not only is being able to distinguish between what’s real and fake important to combat disinformation, but it’s also a way for artists to protect their works from being fed into the mighty maw of AI, or manipulated by it, without the creator’s permission.
Cryptographic watermarks are different from the watermarks most people are used to. They aren’t like the logos that photographers put on their work to protect the images from being stolen or used without permission. Instead, Trufo embeds an invisible signature in the image or video when it’s created. These watermarks work because they can’t be forged (that is, taken and put on an AI-created image to try to pass it off as real). They “stick” to the image or video file, so if someone crops the image or adds subtitles, the watermark stays with it. Trufo is also developing a watermark that works on audio.
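For readers curious about the underlying ideas, here is a minimal, generic sketch of the two properties described above: a payload hidden invisibly in the pixel data, and a keyed signature (an HMAC) so the mark can’t be forged without the secret key. This is a textbook-style toy, not Trufo’s technology; its key name, block layout, and the fragile least-significant-bit embedding are all assumptions for illustration, whereas Trufo’s watermarks are designed to survive edits like cropping.

```python
# Toy illustration only: not Trufo's method. Shows an invisible, keyed
# (unforgeable) watermark hidden in an image's least significant bits.
import hmac, hashlib
import numpy as np

KEY = b"demo-secret-key"  # hypothetical secret; a forger without it can't sign

def embed(image: np.ndarray) -> np.ndarray:
    """Hide an HMAC of the image's high bits in its least significant bits."""
    content = (image & 0xFE).tobytes()                       # content minus LSB plane
    tag = hmac.new(KEY, content, hashlib.sha256).digest()    # 256-bit keyed signature
    bits = np.unpackbits(np.frombuffer(tag, dtype=np.uint8)) # 256 individual bits
    flat = image.flatten()
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits    # overwrite LSBs with the tag
    return flat.reshape(image.shape)

def verify(image: np.ndarray) -> bool:
    """Recompute the signature and compare it with the bits hidden in the image."""
    content = (image & 0xFE).tobytes()
    expected = hmac.new(KEY, content, hashlib.sha256).digest()
    stored = np.packbits(image.flatten()[:256] & 1).tobytes()
    return hmac.compare_digest(stored, expected)

if __name__ == "__main__":
    img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
    marked = embed(img)
    print(verify(marked))        # True: watermark present and intact
    tampered = marked.copy()
    tampered[10, 10] ^= 0x80     # visibly alter one pixel
    print(verify(tampered))      # False: the edit breaks the signature
```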
Watermark technology hasn’t been improved or innovated upon in decades, Han said: “We’re doing something completely different.”
The Trufo watermark will also show if something has been changed. “Let’s say I have this image of a cat, and I add a crown,” she said. Trufo will show that the crown has been added to that region of the photograph.
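Again as a generic illustration rather than Trufo’s actual scheme, localized tamper detection can be sketched by fingerprinting an image in tiles, so a later comparison can say which region changed, not just that something did. The block size and the synthetic “crown” edit below are assumptions for the example.

```python
# Generic sketch of region-level change detection via per-tile hashes.
import hashlib
import numpy as np

BLOCK = 16  # hypothetical tile size in pixels

def block_hashes(image: np.ndarray) -> list[str]:
    """Hash each BLOCK x BLOCK tile of the image separately."""
    h, w = image.shape
    return [
        hashlib.sha256(image[r:r + BLOCK, c:c + BLOCK].tobytes()).hexdigest()
        for r in range(0, h, BLOCK)
        for c in range(0, w, BLOCK)
    ]

def changed_blocks(original: np.ndarray, edited: np.ndarray) -> list[int]:
    """Return the indices of tiles whose content no longer matches."""
    return [
        i
        for i, (a, b) in enumerate(zip(block_hashes(original), block_hashes(edited)))
        if a != b
    ]

if __name__ == "__main__":
    cat = np.zeros((64, 64), dtype=np.uint8)
    crowned = cat.copy()
    crowned[0:8, 24:40] = 255             # "add a crown" near the top of the frame
    print(changed_blocks(cat, crowned))   # only the edited tiles are flagged
```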
The problem of differentiating between real and AI-generated images and videos is still relatively new, and so is Trufo. The start-up is still in its infancy and so far self-funded. But it is at the forefront of the issue: The company became an inaugural member of the U.S. Artificial Intelligence Safety Institute Consortium, a project of the U.S. Department of Commerce’s National Institute of Standards and Technology.
Han said she sees the work as a mission in line with Princeton’s core value to “be in the service of humanity.”
“This is a really important problem, and nobody’s solving it in a satisfactory way,” she said. “We want to, within our ability, do something that has a net positive impact.”
She also knows that it can be challenging to see the need for this technology when the problem it fixes isn’t widespread yet. “We’re trying to build a solution for a problem that will likely become a big one,” she said.