Cornell researchers develop light-based watermarking to spot fake videos
Cornell University scientists have developed a way to detect altered or fabricated videos by embedding invisible watermarks into lighting, a Kazinform News Agency correspondent reports.

The watermarks are tiny changes in brightness that people cannot see but that are still recorded in any video taken under that light. By embedding the code directly into the lighting, the method ensures that any authentic video of the subject contains the hidden watermark, no matter who films it.
The researchers explained that programmable light sources, such as computer screens, photography lamps, and certain types of room lighting, can be coded with software. Older lamps can also be adapted by attaching a computer chip about the size of a postage stamp, which adjusts brightness according to the code.
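The article does not describe the researchers' actual coding scheme, but the basic idea can be illustrated with a rough, hypothetical sketch: a lamp's brightness is flickered around its normal level by a secret pseudorandom code, too faint to notice but strong enough to be recorded by a camera. All names and numbers below are illustrative, and `set_brightness` stands in for whatever interface a coded lamp or retrofit chip would actually expose.

```python
import time
import numpy as np

# Hypothetical sketch only: the Cornell coding scheme is not published in this
# article, and "set_brightness" stands in for whatever interface a coded lamp
# or retrofit chip actually exposes.
rng = np.random.default_rng(42)

def run_coded_light(set_brightness, base_level=0.8, amplitude=0.02,
                    rate_hz=120.0, duration_s=10.0):
    """Flicker the lamp around its base level by about +/-2 percent: too subtle
    for the eye, but large enough to be recorded by any camera filming the
    scene the lamp illuminates."""
    n_steps = int(rate_hz * duration_s)
    code = rng.choice([-1.0, 1.0], size=n_steps)     # the lamp's secret code
    for c in code:
        set_brightness(base_level * (1.0 + amplitude * c))
        time.sleep(1.0 / rate_hz)
    return code                                      # kept for later verification

# Stand-in "lamp" that simply records the brightness levels it was asked for.
levels = []
secret_code = run_coded_light(levels.append, duration_s=0.1)
```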
“Video used to be treated as a source of truth, but that’s no longer an assumption we can make,” said Abe Davis, assistant professor of computer science at Cornell, who first conceived the idea. “Now you can pretty much create a video of whatever you want. That can be fun but also problematic, because it’s only getting harder to tell what’s real.”
Each light carries a unique code that creates a hidden, low-quality version of the original video. If the footage is changed, for example by removing parts or adding AI-generated elements, mismatches between the altered footage and the hidden “code video” expose the manipulation.
“Each watermark carries a low-fidelity time-stamped version of the unmanipulated video under slightly different lighting. We call these code videos. When someone manipulates a video, the manipulated parts start to contradict what we see in these code videos, which lets us see where changes were made. And if someone tries to generate fake video with AI, the resulting code videos just look like random variations,” Davis said.
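The researchers' actual algorithm is not given in the article, but the principle can be sketched in a toy example: correlating each pixel's brightness over time with the secret code pulls out a faint image of the scene (the code video), while an edited region, which never carried the code, comes out as roughly nothing. Everything below, including the 2% modulation amplitude, is an illustrative assumption.

```python
import numpy as np

# Toy illustration of the "code video" idea, not the researchers' algorithm.
# A static scene is filmed under a light modulated by a secret +/-1 code;
# correlating each pixel's brightness over time with that code recovers a
# faint image of the scene, while an edited region recovers nothing.
rng = np.random.default_rng(0)

def embed_code(frames, code, amplitude=0.02):
    """Simulate filming under a coded light: tiny per-frame brightness shifts."""
    return np.clip(frames * (1.0 + amplitude * code[:, None, None]), 0.0, 1.0)

def recover_code_video(frames, code):
    """Demodulate: correlate each pixel's time series with the known code."""
    centered = frames - frames.mean(axis=0, keepdims=True)
    return np.tensordot(code - code.mean(), centered, axes=(0, 0)) / len(code)

scene = rng.random((64, 64)) * 0.5 + 0.25            # toy 64x64 scene
frames = np.repeat(scene[None, :, :], 300, axis=0)   # 300 identical frames
code = rng.choice([-1.0, 1.0], size=300)             # secret per-frame code

coded = embed_code(frames, code)
tampered = coded.copy()
tampered[:, 20:40, 20:40] = 0.5                      # paste over a region

# The untouched recording yields a faint copy of the scene; the pasted block,
# which never carried the code, comes out near zero and flags the edit.
print(np.abs(recover_code_video(coded, code)[22:38, 22:38]).mean())
print(np.abs(recover_code_video(tampered, code)[22:38, 22:38]).mean())
```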
The team also found that several codes can run at once, creating more complex patterns that are harder to fake. Even if someone knew the technique, consistently reproducing every hidden code video in a forgery would be far more difficult.
As Davis explained, “Instead of faking the light for just one video, they have to fake each code video separately, and all those fakes have to agree with each other.”
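Continuing the toy sketch above, that point can be illustrated by running two independent codes at once: each demodulation pulls out its own code video, so a forgery would have to stay consistent with every one of them simultaneously. Again, the codes and amplitudes here are hypothetical.

```python
import numpy as np

# Continuing the toy sketch above: two independent codes running at once.
rng = np.random.default_rng(1)

scene = rng.random((32, 32)) * 0.5 + 0.25
frames = np.repeat(scene[None, :, :], 600, axis=0)

code_a = rng.choice([-1.0, 1.0], size=600)
code_b = rng.choice([-1.0, 1.0], size=600)
coded = frames * (1.0 + 0.02 * (code_a + code_b)[:, None, None])

def recover(frames, code):
    """Correlate the footage with one code; the other code averages out."""
    centered = frames - frames.mean(axis=0, keepdims=True)
    return np.tensordot(code - code.mean(), centered, axes=(0, 0)) / len(code)

# Two independent checks on the same footage: any edit must agree with both.
video_a = recover(coded, code_a)
video_b = recover(coded, code_b)
```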
The approach has worked well in lab tests, both indoors and in some outdoor settings. Still, Davis noted that this is only part of the broader fight against misinformation: “This is an important ongoing problem. It’s not going to go away, and in fact, it’s only going to get harder.”
Earlier, Kazinform reported that researchers have begun embedding hidden commands in their scientific papers to sway AI peer review systems.