Deepfakes, Governance Gaps, and a Risk We Are Not Taking Seriously Enough


Deepfakes are not a future problem. They are already here.
Across different contexts, AI-generated content is being used to defraud people, mislead the public, and damage reputations in ways that are becoming harder to detect. The technology is improving quickly, but the systems meant to guide or regulate its use are not keeping pace. That gap is where real harm is happening.


What concerns me most is not just individual cases, but the pattern behind them: harm accumulating without clear accountability, and communities being affected by tools they do not yet fully understand or have the means to respond to.
To me, that points to a governance gap.


A few things stand out:
First, detection without accountability.
We are beginning to see tools that can detect synthetic media, but there is still very little clarity on responsibility when harm occurs. Is it the platform, the developer of the tool, or the individual who used it? In many cases, there is no clear answer.


Second, asymmetry of access.
It is becoming easier and cheaper to create convincing deepfakes, while detecting them, responding legally, or repairing reputational damage remains difficult and expensive. This imbalance tends to affect those with the least protection the most.


Third, the erosion of trust.
Beyond individual harm, there is a broader risk to how we relate to information itself. If audio and video can no longer be trusted, it affects everything—from public discourse to institutional credibility and collective decision-making.


I think governance has an important role to play here.
Things like clearer human review processes, defined escalation pathways for high-risk cases, and stronger coordination across jurisdictions are not extreme ideas—they are basic structures that need to catch up with the technology.


I am still early in this space, but I am trying to contribute where I can. Through my work with the AI Ethics and Integrity International Association (AIEI), I have been tasked with drafting governance-related documents, including a Human Review and Escalation SOP. I am also continuing to build my understanding of these issues more broadly.


I would be interested to hear how others are thinking about synthetic media governance. What approaches seem most practical right now? And what do you think is still being overlooked?


— Agwu Naomi Nneoma
Associated Participant, Legal Committee | AIEI


> One thing I am still thinking about is where responsibility should realistically sit when harm from deepfakes occurs.

Is it more effective to focus on platforms, developers, or end users—or does it have to be shared?
