Deepfakes Are No Longer Someone Else's Problem


On February 23, 61 data protection authorities from 52 countries simultaneously signed a single declaration. It was the "Joint Statement on AI-Generated Content and the Protection of Personal Information," adopted by the Global Privacy Assembly (GPA). Korea's Personal Information Protection Commission (PIPC) was among the signatories.

When 52 countries move at the same time, it means the situation is urgent. And the direct trigger for this declaration was xAI's Grok.

What Grok Set Off

In late December 2025, Grok's image editing feature was integrated with X (formerly Twitter). Musk personally promoted it, telling people to "give it a try." Problems erupted almost immediately. Users started uploading photos of real people and entering prompts to undress them.

Over 9 days, Grok generated approximately 4.4 million images on X. Reports emerged that nearly half of those depicted women in sexually explicit content. Images targeting children were included. Grok earned the nickname "xxxAI."

xAI eventually restricted the feature to paying subscribers, but the EU criticized the move as falling short of a fundamental solution, and the UK signaled tighter regulation ahead. Indonesia became the first country to block the Grok service entirely. California prosecutors and attorneys general from 35 U.S. states launched investigations, and House Democrats sent a letter to Musk.

This situation is the direct backdrop to the GPA joint declaration.

Korea Had Already Been Through This

In Korea, deepfake sex crimes aren't a new story. Long before Grok, they had been happening in a far more organized fashion.

In August 2024, large-scale Telegram-based deepfake sex crimes came to light. It started when MBC reported on the Inha University case. In a Telegram chat room with over 1,200 participants, sexually exploitative material had been produced and distributed for years using AI-synthesized faces of female students. When victims discovered what was happening, the perpetrators mocked and threatened them instead.

This wasn't limited to one university. A list of "deepfake victim schools" shared on social media included over 500 middle schools, high schools, and universities nationwide. Of 781 victim support requests, 37% involved minors. Deepfake chat rooms targeting female military personnel were also discovered, and a room distributing sexually exploitative material of family members had over 1,900 participants.

The methods were sophisticated. Under the name "겹지인방" (overlapping acquaintance rooms), participants shared information about women they mutually knew and collected selfies from social media to synthesize with AI. A defining characteristic of these crimes is that as the technological barrier has dropped, the proportion of teenage perpetrators has become overwhelming.

According to the Korean National Police Agency's 2025 crackdown results, 1,827 deepfake sex crime cases occurred over one year, resulting in 1,438 arrests. Of the suspects, 61.8% were teenagers and 30.2% were in their twenties. That means 9 out of 10 suspects were in their teens or twenties.

The Declaration's Four Principles

The four principles in the GPA joint declaration are aimed squarely at the problems exposed by both the Grok incident and Korea's Telegram deepfake cases.

Safeguards: Technical and administrative measures must be implemented to prevent misuse of personal data and non-consensual generation of sexual content. In Grok's case, the product launched with only minimal guardrails and the damage followed immediately. It's a textbook example of what happens when an AI service ships without proper safeguards.

Transparency: Information about what an AI system can and cannot do must be provided transparently. Users should be able to know what they can and cannot create, and whether a given piece of content was AI-generated.

Redress procedures: Effective mechanisms for swift reporting and removal must be established. One of the biggest problems in the Telegram cases was the difficulty of deletion and investigation. Overseas servers, encrypted communications, destruction of evidence — victims couldn't even find out how far their images had spread.

Protection of children and young people: Age-appropriate information and enhanced protective measures must be implemented. The fact that 37% of victims in Korea's deepfake crimes were minors, and 62% of perpetrators were teenagers, explains why this principle had to be stated separately.

From a Consultant's Perspective

In privacy consulting, most companies think "that would never happen on our service." There's a widespread tendency to view the deepfake problem as an issue only for platform operators or AI developers.

But from an ISMS-P certification standpoint, if a company operates a service with AI capabilities, every one of these principles could become an audit checkpoint. If there's AI that processes user-uploaded images, safeguards are needed. There should be disclosure procedures for AI-generated content. And a system for reporting and removing harmful content must be in place.
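To make the "safeguards" checkpoint concrete, here is a minimal sketch of the kind of pre-generation guardrail an auditor might look for on an image-editing endpoint. Every name in it (`check_request`, `EditRequest`, `BLOCKED_TERMS`) is illustrative, not a real API, and a keyword denylist stands in for what would in practice be a trained classifier plus human review:

```python
from dataclasses import dataclass

# Illustrative denylist only; a production system would use a content
# classifier, not keyword matching.
BLOCKED_TERMS = {"undress", "nude", "remove clothes"}

@dataclass
class EditRequest:
    prompt: str
    subject_is_real_person: bool   # e.g. set by an upstream face-detection step
    subject_consent_on_file: bool  # consent record, per the Safeguards principle

def check_request(req: EditRequest) -> tuple[bool, str]:
    """Return (allowed, reason). Deny by default for real people without consent."""
    lowered = req.prompt.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return False, "prompt matches sexual-content denylist"
    if req.subject_is_real_person and not req.subject_consent_on_file:
        return False, "no consent record for identified person"
    return True, "ok"

# An undressing prompt against a real person's photo is refused outright
allowed, reason = check_request(
    EditRequest("undress this person",
                subject_is_real_person=True,
                subject_consent_on_file=False))
print(allowed, reason)
```

The design point is the deny-by-default posture: the service refuses edits of identified real people unless a consent record exists, rather than generating first and moderating later, which is the failure mode the Grok incident illustrated.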

This joint declaration isn't binding regulation yet. But 52 countries setting a direction simultaneously is a signal that national legislation will follow this path. Korea already expanded its laws in 2024 by amending the Sexual Violence Punishment Act to criminalize mere possession and viewing of deepfake content, and broadened the scope of undercover investigations to include adult victims.

For companies, the question to ask right now is: "Could the AI features in our service be misused?" As Grok demonstrated, waiting to respond until after an incident means facing the worst-case outcome — global regulatory investigations and service bans.

