Deepfake Offenders Beware: New Law Lets Victims Sue

A Legal First for an AI-Driven Problem

In a landmark move for digital rights, the U.S. government has passed legislation giving victims of AI-generated explicit deepfakes the right to sue the people who create them. The measure, embedded in the 2024 National Defense Authorization Act, is the first federal law of its kind to target the creation of nonconsensual pornographic content with artificial intelligence. Until now, victims had limited recourse under existing laws that were never designed for the nuances of synthetic media. Advocates call it a long-awaited acknowledgment of the psychological and reputational harm these digital forgeries cause. Lawmakers say it is a necessary response to the explosive growth of deepfake technology and the increasing sophistication with which it mimics real people.

Holding Digital Creators Accountable

The new provision empowers victims to file civil lawsuits against the perpetrators behind these deepfakes, potentially including companies as well as individuals, marking a significant expansion of legal accountability in the AI space. With platforms like Reddit and X (formerly Twitter) frequently grappling with the proliferation of AI-generated explicit images, the law introduces a concrete tool against an abuse problem that disproportionately targets women and public figures. Enforcement challenges remain, especially when creators operate anonymously or from overseas, but the law sets a precedent for treating deepfake creation as a serious offense rather than a technical mishap. It may also pave the way for broader AI regulation in the U.S.

BytesWall

BytesWall brings you smart, byte-sized updates and deep industry insights on AI, automation, tech, and innovation — built for today's tech-driven world.