In the Age of AI, Photographs Need Proof
I’ve noticed something changing in the status of the photograph itself.
I look at my Canon R5 a little differently now.
It is still the same camera. It still does what I need it to do. It still produces files I trust, because I made them. But in this new climate, trust is no longer only about the photographer. Increasingly, it is about the file’s ability to prove where it came from.
That is the shift.
For a long time, photography lived on a kind of silent assumption: that something had been there. You could crop, edit, interpret, even mislead — but the image still carried a basic link to the world.
Now that link is no longer visible enough on its own.
That is why all this movement around C2PA and image authenticity matters.
In simple terms, C2PA is a technical standard meant to preserve an image’s provenance: who made it, where it came from, and whether it was altered along the way.
Not because most viewers know what the acronym means.
Not because metadata suddenly became exciting.
But because the industry has understood something important: in the age of generative AI, the image alone is no longer enough.
Leica was the first to make that visible in a camera people could actually buy. Sony pushed it into the news workflow. Canon is there too now, with support on its newer pro bodies, even if the first target is clearly agencies and professional use rather than ordinary photographers.
That detail matters.
This is not being built first for people like me posting photographs online.
It is being built first for the places where proof matters most: newsrooms, agencies, documentary workflows, public trust.
And that tells you everything.
The answer to the crisis is no longer just ethical. It is industrial.
Cameras are beginning to sign files.
Software is beginning to preserve provenance.
Verification is becoming part of the chain.
That will not solve every problem. It will not make photographs better. It will not make people more thoughtful. And it will only work if the whole chain agrees to keep that information alive.
But it does change the default position.
For most of photography’s history, a photograph was treated as real unless there was reason to doubt it.
We may be entering the opposite era.
Soon, an image without provenance may start to feel incomplete. Not because it is fake. But because it cannot prove that it isn’t.
In the AI era, authenticity may no longer be something we see.
It may be something we verify.
Further reading
- Chelsea Northrup on image authenticity
- C2PA — Coalition for Content Provenance and Authenticity
- Leica M11-P and Content Credentials
- Sony Camera Authenticity Solution
- Canon firmware support for C2PA on EOS R1 and R5 Mark II


Philippe, well said! It's a shame we have to stop and think about this rather than simply enjoying the photo. I think one of the reasons people submit AI images as photos is that there is no other place to put them. While AI images may be art, they are not photographs. This is the same as photo-realistic drawing; just because they look like photos, it doesn't make them photos. Likewise, using a "painterly" filter on a photo doesn't make it a painting. I wish sites like 1x, Vero, Glass, and IG would have separate "channels": one for photos and one for AI images, so people have somewhere to post their AI images and we as viewers know what we are seeing. Realistically, this wouldn't completely resolve the issue, since some people will post whatever they can wherever they can, but it could help a lot.
Thanks, Mike. I agree completely. AI images may be art, but they are not photographs. And yes, separate channels would help a lot. Not a perfect solution, but at least a clearer one. The difficulty now is that doubt comes easily, while proof is much harder. That is exactly why provenance matters.