C2PA Content Credentials promise a way to flag deepfakes and other AI-generated content. The risk: new privacy liabilities, identity exposure, and a global trust system controlled by Big Tech.
Thursday, September 18, 2025
Big Tech’s standard for fighting AI fakes puts privacy on the line


Welcome to Eye on AI, with AI reporter Sharon Goldman. In this edition…a new report says a growing standard for fighting AI fakes puts privacy on the line…Nvidia and Intel announce sweeping partnership to co-develop AI infrastructure and personal computing products…Meta raises its bets on smart glasses with an AI assistant…China’s DeepSeek says its hit model cost just $294,000 to train.

Last week, Google said its new Pixel 10 phones will ship with a feature aimed at one of the biggest questions of the AI era: Can you trust what you see? The devices now support the Coalition for Content Provenance and Authenticity (C2PA), a standard backed by Google and other heavyweights like Adobe, Microsoft, Amazon, OpenAI, and Meta. At its core is something called Content Credentials, essentially a digital nutrition label for photos, videos, or audio. The cryptographically signed metadata tag, designed so that tampering can be detected, shows who created a piece of media, how it was made, and whether AI played a role.
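
For the curious: the C2PA community publishes an open-source command-line tool, c2patool, that reads a file’s Content Credentials. Here’s a minimal Python sketch that wraps it, assuming c2patool is installed and on your PATH and, per its documentation, prints the manifest as JSON and errors out when no credentials are present (photo.jpg is a placeholder):

```python
import json
import subprocess

def read_content_credentials(path: str) -> dict | None:
    """Return a file's C2PA manifest as parsed JSON, or None if absent.

    Wraps c2patool (https://github.com/contentauth/c2patool), which prints
    a manifest report as JSON by default. Assumes the tool exits nonzero
    when a file carries no Content Credentials.
    """
    result = subprocess.run(["c2patool", path], capture_output=True, text=True)
    if result.returncode != 0:
        return None  # no credentials found, or the tool rejected the file
    return json.loads(result.stdout)

if __name__ == "__main__":
    manifest = read_content_credentials("photo.jpg")  # placeholder file name
    if manifest is None:
        print("No Content Credentials attached.")
    else:
        # The manifest records who signed the asset and how it was produced.
        print(json.dumps(manifest, indent=2))
```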

Over a year ago, I reported that TikTok would automatically label, with Content Credentials, all realistic AI-generated content created using TikTok tools. The standard also predates the current generative AI boom: the C2PA was founded in February 2021 by a group of technology and media companies to create an open, interoperable standard for digital content provenance (the origin and history of a piece of content) and to build trust in online information.

But a new report from the World Privacy Forum, a data-privacy nonprofit, warns that this growing push for trust could put privacy on the line. The group argues C2PA is widely misunderstood: it doesn’t detect deepfakes or flag potential copyright infringement. Instead, it’s quietly laying down a new technical layer of media infrastructure—one that generates vast amounts of shareable data about creators and can link to commercial, government, or even biometric identity systems.

Because C2PA is an open framework, its metadata is designed to be replicated, ingested, and analyzed across platforms. That raises thorny questions: Who decides what counts as “trustworthy”? C2PA relies on developing “trust lists” and a compliance program to verify participants, but if small media outlets, indie journalists, or independent creators don’t make the list, their work could be penalized or dismissed. In theory, any creator can apply credentials to their work and apply to the C2PA to become a trusted entity. In practice, full “trusted status” often requires a certificate from a recognized certificate authority, meeting criteria that are not fully public, and navigating a verification process. According to the report, this risks sidelining marginalized voices, even as policymakers, including a New York state lawmaker, push for “critical mass” adoption.
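
To make that gatekeeping concrete, here’s a deliberately simplified Python sketch of the pattern the report describes. This is not the real C2PA protocol, which rests on X.509 certificate chains rather than a hand-kept set of key fingerprints, but it shows how a signature can be cryptographically valid and still land in a second-class bucket if the signer isn’t on the list (it assumes the third-party cryptography package is installed):

```python
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical trust list: SHA-256 fingerprints of approved signing keys.
# Real C2PA trust decisions rest on certificate authorities and the
# coalition's compliance program, not a hand-maintained set like this.
TRUST_LIST: set[str] = set()

def fingerprint(public_key) -> str:
    raw = public_key.public_bytes(
        encoding=serialization.Encoding.Raw,
        format=serialization.PublicFormat.Raw,
    )
    return hashlib.sha256(raw).hexdigest()

def verify(media: bytes, signature: bytes, public_key) -> str:
    try:
        public_key.verify(signature, media)  # raises if bytes were altered
    except InvalidSignature:
        return "invalid signature"
    # A valid signature proves who signed and that nothing changed since.
    # It does not prove the content is genuine: a forged image signed by
    # any key still verifies, which is the manipulation risk noted below.
    if fingerprint(public_key) not in TRUST_LIST:
        return "valid, but signer is not on the trust list"
    return "valid and trusted"

# A big platform that made the list vs. an indie creator who didn't.
platform_key = Ed25519PrivateKey.generate()
indie_key = Ed25519PrivateKey.generate()
TRUST_LIST.add(fingerprint(platform_key.public_key()))

photo = b"...image bytes..."
print(verify(photo, platform_key.sign(photo), platform_key.public_key()))
print(verify(photo, indie_key.sign(photo), indie_key.public_key()))
```

Note what the check does and doesn’t prove: a valid signature confirms who signed and that the bytes haven’t changed since, not that the content itself is genuine.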

But inclusion on these “trust lists” isn’t the only concern. The report also warns that C2PA’s openness cuts the other way: the framework can be easy to manipulate, since so much depends on the discretion of whoever attaches the credentials, and there’s little to stop bad actors from applying them in misleading ways.

“A lot of people think, oh, this is a content labeling system, they’re not necessarily cognizant of all of the layers of identifiable information that might be baked in here,” said Kate Kaye, deputy director of the World Privacy Forum and co-author of the report. She emphasized that C2PA isn’t just a simple label on a piece of media — it creates a stream of data that can be ingested, stored, and linked to identity information across countless systems.

All of this matters for both businesses and consumers. For example, Kaye stressed that businesses might not realize that C2PA data falls under privacy and data governance, requiring policies for how that metadata is collected, shared, and secured. Researchers, meanwhile, have already shown it’s possible to cryptographically sign forged images. So while companies may embrace C2PA to gain credibility, they also take on new obligations, potential liabilities, and dependence on a trust system controlled by Big Tech players.

For consumers, the chief risks are privacy and identity exposure. C2PA metadata can include timestamps, geolocation, editing details, and even connections to identity systems (including government IDs), yet consumers may have little awareness of, or control over, what is being captured. Participation is technically opt-in, but content without credentials could be marked as less trustworthy. And in the case of TikTok, users are automatically opted in (other platforms like Meta and Adobe are adopting C2PA, but generally as opt-in for creators).
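
To see what can ride along with a file, here’s a sketch built around a hypothetical manifest, loosely modeled on C2PA assertion labels such as c2pa.actions and stds.exif; the exact field names and structure vary by implementation and version. It walks the nested fields and flags the ones a consumer might not realize are traveling with their content:

```python
# Hypothetical manifest, loosely modeled on C2PA assertions. Real field
# names and structure vary by implementation and manifest version.
manifest = {
    "claim_generator": "ExampleCamera/1.0",  # hypothetical generator name
    "signature_info": {"issuer": "Example Certificate Authority"},
    "assertions": [
        {"label": "c2pa.actions",
         "data": {"actions": [{"action": "c2pa.created"}]}},
        {"label": "stds.exif",
         "data": {"exif:GPSLatitude": "40,44.3N",      # geolocation
                  "exif:GPSLongitude": "74,0.2W",
                  "exif:DateTimeOriginal": "2025-09-18T09:12:00Z"}},
        {"label": "stds.schema-org.CreativeWork",
         "data": {"author": [{"name": "Jane Photographer"}]}},  # identity
    ],
}

# Substrings that hint at privacy-relevant fields in this toy example.
SENSITIVE_HINTS = ("gps", "datetime", "author", "name")

def walk_keys(obj, prefix=""):
    """Yield dotted paths for every key in a nested dict/list structure."""
    if isinstance(obj, dict):
        for key, value in obj.items():
            path = f"{prefix}.{key}" if prefix else key
            yield path
            yield from walk_keys(value, path)
    elif isinstance(obj, list):
        for item in obj:
            yield from walk_keys(item, prefix)

def flag_sensitive(manifest: dict) -> list[str]:
    """Surface fields a consumer might not realize travel with the file."""
    hits = []
    for assertion in manifest.get("assertions", []):
        for key in walk_keys(assertion.get("data", {})):
            if any(hint in key.lower() for hint in SENSITIVE_HINTS):
                hits.append(f"{assertion['label']}: {key}")
    return hits

for hit in flag_sensitive(manifest):
    print(hit)  # e.g. "stds.exif: exif:GPSLatitude"
```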

Overall, there are a lot of power dynamics at play, Kaye said. “Who is trusted and who isn’t and who decides – that’s a big, open-ended thing right now.” But the burden to figure it out isn’t on consumers, she emphasized: Instead, it’s on businesses and organizations to think carefully about how they implement C2PA, with appropriate risk assessments.

With that, here’s the rest of the AI news.

Sharon Goldman
sharon.goldman@fortune.com
@sharongoldman

AI IN THE NEWS

Nvidia and Intel announce sweeping partnership to co-develop AI infrastructure and personal computing products. The deal, which includes Nvidia taking a $5 billion stake in Intel, brings together two longtime rivals at a moment when demand for AI computing is exploding. “This historic collaboration tightly couples NVIDIA’s AI and accelerated computing stack with Intel’s CPUs and the vast x86 ecosystem — a fusion of two world-class platforms,” Nvidia CEO Jensen Huang said. “Together, we will expand our ecosystems and lay the foundation for the next era of computing.”

Meta raises its bets on smart glasses with an AI assistant. According to the New York Times, Meta is doubling down on smart glasses after selling millions since their debut four years ago. At its annual developer conference this week, the company unveiled three new models — including the $799 Meta Ray-Ban Display, which features a tiny screen in the lens, app controls via a wristband, and a built-in AI voice assistant. Meta also introduced an upgraded Ray-Ban model and a sport version made with Oakley. But the rollout wasn’t flawless: onstage, Mark Zuckerberg’s demo faltered when the glasses failed to deliver a recipe and place a call.

China's DeepSeek says its hit model cost just $294,000 to train. Reuters reported today that Chinese AI startup DeepSeek is back in the spotlight after months of relative quiet, with new details on how it trained its reasoning-focused R1 model. A recent Nature article co-authored by founder Liang Wenfeng revealed the system cost just $294,000 to train using 512 of Nvidia’s China-only H800 chips, a striking contrast with U.S. firms like OpenAI, whose training runs cost well over $100 million. But questions remain: U.S. officials have said that DeepSeek had access to large volumes of restricted H100 chips, despite export controls, and the company has now formally acknowledged that it also used older A100s in early development. The revelations may reignite debate over AI “scaling laws” and whether massive clusters of the most advanced chips are really necessary to train cutting-edge AI models. They also highlight ongoing geopolitical tensions over access to Nvidia’s chips.

AI CALENDAR

Oct. 6-10: World AI Week, Amsterdam

Oct. 21-22: TedAI San Francisco.

Nov. 10-13: Web Summit, Lisbon. 

Nov. 26-27: World AI Congress, London.

Dec. 2-7: NeurIPS, San Diego

Dec. 8-9: Fortune Brainstorm AI San Francisco.

EYE ON AI NUMBERS

50% 
Half of Americans are now more worried than excited about AI