WSJ Sees Deepfakes as Existential Threat
I came across this gem this morning:
The Wall Street Journal is recognizing that so-called "deepfakes" are a cause for concern. In case you missed it, I wrote a bit about this topic here:
https://steemit.com/news/@nealmcspadden/trust-in-a-trustless-society
People are starting to recognize that this is an existential threat to the way information is conveyed in today's world.
Then there is this section:
How can you detect deepfakes?
We’re working on solutions and testing new tools that can help detect or prevent forged media. Across the industry, news organizations can consider multiple approaches to help authenticate media if they suspect alterations.
“There are technical ways to check if the footage has been altered, such as going through it frame by frame in a video editing program to look for any unnatural shapes and added elements, or doing a reverse image search,” said Natalia V. Osipova, a senior video journalist at the Journal. But the best option is often traditional reporting: “Reach out to the source and the subject directly, and use your editorial judgment.”
Which raises the question: will this be done for every video source? No, not in the short term. Eventually, some sort of database could be built holding the video equivalent of checksums, so footage could be compared against it automatically and fakes identified without human effort. But I think that's a ways off for now.
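To make that concrete, the automated-checking idea boils down to publishers releasing fingerprints of original footage that anyone can compare a copy against. Here's a minimal sketch in Python, assuming a hypothetical registry (KNOWN_ORIGINALS) keyed by filename; note that a plain SHA-256 only catches byte-level alterations, so a re-encoded or cropped copy would need perceptual hashing instead:

```python
import hashlib
from pathlib import Path

def fingerprint(video_path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute a SHA-256 fingerprint of a video file, reading it in chunks."""
    digest = hashlib.sha256()
    with video_path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical registry of fingerprints published by the original source.
# The value below is a placeholder, not a real digest.
KNOWN_ORIGINALS = {
    "press-briefing.mp4": "<sha-256 hex digest published by the source>",
}

def looks_authentic(video_path: Path) -> bool:
    """Return True only if the file's fingerprint matches the published one."""
    expected = KNOWN_ORIGINALS.get(video_path.name)
    return expected is not None and fingerprint(video_path) == expected

if __name__ == "__main__":
    clip = Path("press-briefing.mp4")
    if clip.exists():
        print("matches published original" if looks_authentic(clip)
              else "no match: altered, re-encoded, or simply not registered")
```

The hard part isn't the hashing, it's getting sources to publish fingerprints in the first place and keeping the registry trustworthy, which is why I don't expect this anytime soon.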
But I like that last line best. "Reach out to the source," you know, like doing actual reporting.
Even if reporters did start doing their jobs, though, that only covers official news publications. The potential for mayhem by deepfakes on social networks is enormous.