This was the question we were chewing over at Gerbeaud's legendary Budapest patisserie yesterday. I was with two of the most experienced and creative thinkers about global media: Rosental Alves of the University of Texas, who knows everyone and everything about Latin American journalism, and Behrouz Afagh, head of the BBC's Asia and Pacific news coverage. The day before, we had all heard a terrific idea from Fred Ritchin, the dean of American news photographers, who left his post as photo head of the New York Times in 1982 and is now a photography professor at New York University.
Here is Fred’s idea: when you digitally publish a serious news photo, you embed a link in the top left corner of the image which, when moused over, shows a “before” image of the same subject taken just before the selected image, and in the top right corner, a link that shows an “after” shot taken just after the main image. It’s a way of seeing whether the selected image was constructed or was actually taken, as authentic photojournalism is supposed to be, from a real flow of action. This process itself could be faked, of course, as can almost everything now. But there would be little incentive to fake these contextualizing “before” and “after” shots, since the point is that they would be voluntarily included by those who are trying to hold themselves accountable to a professional standard of veracity.
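To make the idea concrete, here is a minimal, purely illustrative sketch of how a publisher might bundle those surrounding frames with the published image. The class and field names are my own invention, not any existing standard; a real implementation would live in the image’s metadata or the publisher’s page markup.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ContextualPhoto:
    """A published news image bundled with its neighboring frames.

    The 'before' and 'after' URLs point to the shots taken immediately
    around the published frame; a viewer could reveal them on mouseover.
    All names here are hypothetical, for illustration only.
    """
    image_url: str
    before_url: Optional[str] = None  # frame taken just before the main image
    after_url: Optional[str] = None   # frame taken just after the main image

    def has_context(self) -> bool:
        # True only if both surrounding frames were voluntarily supplied
        return self.before_url is not None and self.after_url is not None

photo = ContextualPhoto(
    image_url="https://example.org/main.jpg",
    before_url="https://example.org/before.jpg",
    after_url="https://example.org/after.jpg",
)
print(photo.has_context())  # True
```

Because the context is voluntary, the absence of the surrounding frames is itself informative: a photo without them simply makes no claim to this standard.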
This kind of device, which unfortunately didn’t capture the imagination of our colleague Ethan Zuckerman at the MIT Media Lab when we posed it to him as something we would like to see his designers create, is related to my own long-fantasized authentication tool. I would like a tool that would let those who want to be held to a veracity standard signal that commitment (a “Good Housekeeping Seal of Approval”). The creators and spreaders of information would embed a visible bug in any image, video selection or piece of text that would carry its provenance. Where did this “fact,” image or story originate? We might also create a function to track where it has been since. It would automatically create a history like those we can access now for any given Wikipedia entry. The content that would be “bugged” would have to be fixed, like a PDF, which would make it difficult to remix or tweak. But that’s exactly the point. It’s the raw material of fact, before it gets thrown into the great mixmaster of the web.
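One way such a Wikipedia-style history could work is as an append-only log chained to a fingerprint of the fixed content, so that both the content and its travel history are tamper-evident. The sketch below is an assumption of mine about how the “bug” might be backed, not a description of any real system; all class and function names are hypothetical.

```python
import hashlib
import json

def _fingerprint(entry: dict) -> str:
    # Stable hash over an entry's contents, including its predecessor's
    # hash, so any later tampering with the history is detectable.
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

class ProvenanceLog:
    """Append-only travel history for one fixed piece of content (hypothetical)."""

    def __init__(self, content: bytes, origin: str):
        # The content is hashed once; like a fixed PDF, it must not change.
        self.content_hash = hashlib.sha256(content).hexdigest()
        first = {"actor": origin, "action": "originated",
                 "prev": None, "content": self.content_hash}
        self.entries = [dict(first, hash=_fingerprint(first))]

    def record(self, actor: str, action: str) -> None:
        # Each hop (republication, syndication) links back to the last entry.
        entry = {"actor": actor, "action": action,
                 "prev": self.entries[-1]["hash"],
                 "content": self.content_hash}
        self.entries.append(dict(entry, hash=_fingerprint(entry)))

    def verify(self, content: bytes) -> bool:
        # Fails if the content was remixed or the chain of entries was edited.
        if hashlib.sha256(content).hexdigest() != self.content_hash:
            return False
        prev = None
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if body["prev"] != prev or e["hash"] != _fingerprint(body):
                return False
            prev = e["hash"]
        return True

log = ProvenanceLog(b"the original story text", "BBC")
log.record("aggregator.example", "republished")
print(log.verify(b"the original story text"))  # True
print(log.verify(b"a tweaked story text"))     # False
```

The design choice matters: because every entry includes the content hash, the moment someone tweaks the “raw material of fact,” verification fails and the seal no longer applies.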
Of course, those doing risky communication would forgo using this history bug, in order not to be tracked down. This is sort of the opposite of Tor. (All tools can be used for evil. That doesn’t mean we should shy away from creating new tools.)
For those who are working hard to offer or find verified, authenticated facts and images, this little bug could offer a missing accountability factor. If people understand where a story (or image) comes from, they might know better what credibility to give it. It could be introduced on a voluntary basis by the purveyors whose vetting is considered essential to their brand (New York Times, BBC). This wouldn’t solve all the problems of critically evaluating the flow of content, but it would give us a tool we could sorely use.
If you add another feature, an automated or non-automated micropayment system tied to the authentication, then you might just have an interesting tool that would not only help people figure out what to take seriously, but would also pay for this higher-veracity material, supporting the often expensive production of investigative journalism and other hard-to-get vetted and contextualized news items.
Anyone like/hate these ideas? Your feedback is awaited.