But if you happened to encounter the fake photo on Facebook, where it was repeatedly presented as real, and if you happened to object to NFL players like Bennett protesting during the national anthem, then you might have been inclined to believe what you saw. You might even have been inclined to write a comment like, “Shut down the NFL. Send them all overseas to see how much better their life will be,” as one Facebook user wrote just last week, nearly a year after the photo began circulating and despite thousands of other comments identifying it as fake.
Doctored images are the scourge of the web-wide fight against fake news. Tech companies and researchers can analyze the behavior of a typical bot in order to sniff out new ones. They can limit the reach of news outlets that perpetually share stories flagged as false. They can see when accounts are coordinating their activity and wipe out whole networks at once. But determining whether a photo that’s been meme-ified and screenshotted a thousand times over depicts something real requires a different level of forensic analysis. Researchers are beginning to develop software that can detect altered images, but they’re locked in an arms race with increasingly skillful creators of fake images.
As memes have become the language of the internet, they’ve also become a key vehicle for misinformation. Fact-checking organizations dutifully work to debunk images like the flag-burning photo, but finding those fact-checks remains the responsibility of users, who are already busy scrolling through their phones, liking and sharing as they go. And rarely are those level-headed analyses as widely shared as the original misinformation.
What we really need, says Ash Bhat, is a tool that proactively tells people when their media diet has become infected with misinformation, at the very moment they’re seeing it. So Bhat and his business partner, Rohan Phadte, both UC Berkeley undergrads, came up with a browser plug-in that does just that. Called SurfSafe, the plug-in, which launches today, allows people to hover over any image that appears in their browser, whether that’s on Facebook or a site like WIRED. SurfSafe instantly checks that photo against more than 100 trusted news sites and fact-checking sites like Snopes to see whether it’s appeared there before. The photo of Bennett burning the flag, for instance, would surface nine other articles where the image appeared, including fact checks from Snopes and Time.com.
“We want SurfSafe to become a solution that’s analogous to anti-virus software,” Bhat says. “We want to scan your news feed for fake news as you browse.”
The concept for SurfSafe grew out of an earlier tool called BotCheck.me, developed by Bhat and Phadte’s startup, RoBhat Labs. It was also a browser extension, one that added a button to every tweet and Twitter profile, which users could click to check whether that account likely belonged to a bot. Bhat and Phadte used machine learning to analyze the difference between typical bot behavior and human behavior on Twitter and developed a model that they said could predict bots with 93.5 percent accuracy.
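RoBhat Labs hasn’t published the model’s internals, so any reconstruction is guesswork. The toy sketch below, in Python with invented behavioral features and made-up training rows, only illustrates the general approach: score an account from signals like posting volume and retweet ratio.

```python
# Toy illustration of behavioral bot classification. The features, data,
# and model choice here are assumptions for demonstration, not
# BotCheck.me's actual method.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [tweets per day, fraction that are retweets, mean seconds between tweets]
X = np.array([
    [300.0, 0.95,   12.0],   # bot-like: high volume, nearly all retweets
    [250.0, 0.90,   20.0],   # bot-like
    [  8.0, 0.30, 4000.0],   # human-like: modest volume, original content
    [  5.0, 0.10, 9000.0],   # human-like
])
y = np.array([1, 1, 0, 0])   # 1 = bot, 0 = human

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict([[280.0, 0.92, 15.0]]))  # -> [1]: flagged as likely bot
```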
Over the course of that work, the two students realized not only how much photo-based content these bots were sharing, but also just how difficult it was to vet. That’s a challenge afflicting both researchers and platforms, says Onur Varol, a postdoctoral researcher at Northeastern University’s Center for Complex Network Research, who has helped build a competitor to BotCheck.me called Botometer. “Image fakery or trying to create misleading information in photos is a much deeper problem,” says Varol. “It’s a really difficult task even for journalists to validate if they’re fake or real.”
That’s especially true, Varol says, when the image itself is real but is presented online in an entirely different context. A photo from one protest, for instance, might turn up in a story about another, misleading the viewer about what really happened.
SurfSafe isn’t a perfect solution, but it’s certainly an ambitious start. It stores a unique digital fingerprint for every photo on more than 100 news sites that SurfSafe considers trusted, including outlets like NYTimes.com, CNN.com, and FoxNews.com. It also saves a signature of every photo its users see while they’re browsing the internet with the plug-in installed. “One user can see hundreds of thousands of images per day, just with basic browsing habits,” Phadte says. Photos that are similar but doctored will have fingerprints, or “hashes,” that are almost, but not precisely, the same. “If an image is Photoshopped, only part of the image hash is different, so ultimately, we can tell that these images are pretty similar,” Phadte says.
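Phadte hasn’t detailed SurfSafe’s fingerprinting scheme, but hashes that change only partially when an image is edited describe a perceptual hash. The sketch below shows one common variant, the difference hash (dHash); the 64-bit size and function names are illustrative assumptions, not SurfSafe’s actual implementation.

```python
# Minimal sketch of a perceptual "difference" hash (dHash), one common way
# to build fingerprints that survive small edits. SurfSafe's real scheme
# isn't public; this is illustrative only.
from PIL import Image

def dhash(path: str, hash_size: int = 8) -> int:
    """Return a 64-bit fingerprint: each bit records whether a pixel is
    brighter than its right-hand neighbor in a shrunken grayscale copy."""
    img = Image.open(path).convert("L").resize((hash_size + 1, hash_size))
    pixels = list(img.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (left > right)
    return bits

def hamming(a: int, b: int) -> int:
    """Count differing bits; a small distance means 'pretty similar.'"""
    return bin(a ^ b).count("1")

# A Photoshopped copy changes only some bits:
# hamming(dhash("original.jpg"), dhash("doctored.jpg")) might be ~5 of 64,
# while two unrelated photos typically differ in around half the bits.
```

Because each bit reflects only a local brightness comparison, an edit to one region flips only the bits covering that region, which is why a doctored copy’s fingerprint ends up almost, but not precisely, the same.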
When a user hovers over a photo, SurfSafe scans the entire database of fingerprints to see if it’s ever encountered that image before in its raw or doctored form. If it has, it instantly surfaces the other images on the right side of the screen, prioritizing the earliest instance of the image, as it’s most likely to be the original. Users can then flag the image as propaganda, Photoshopped, or misleading, which helps inform the SurfSafe model going forward.
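That lookup amounts to a nearest-neighbor search over fingerprints. The sketch below, with invented record fields and a made-up distance threshold, shows the logic under the assumption that fingerprints are 64-bit hashes like the one above; sorting by first sighting implements the earliest-instance heuristic, and a production system would use an index such as a BK-tree rather than a linear scan.

```python
# Hypothetical sketch of the lookup step: find near-matching fingerprints
# and surface the earliest sighting first. Field names and the threshold
# are assumptions, not SurfSafe's actual design.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Sighting:
    fingerprint: int      # 64-bit perceptual hash (see dhash above)
    url: str              # where the image appeared
    first_seen: datetime  # when the system first recorded it

def find_matches(query: int, db: list[Sighting], max_distance: int = 6) -> list[Sighting]:
    """Return stored images within max_distance bits of the query,
    earliest first -- the oldest copy is most likely the original."""
    hits = [s for s in db if bin(query ^ s.fingerprint).count("1") <= max_distance]
    return sorted(hits, key=lambda s: s.first_seen)
```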
Bhat acknowledges the tool has some blind spots. If SurfSafe has never encountered an image before, for instance, the user will simply see that there are no matches, even if that image is, in fact, fake. But Bhat views that as a minor flaw. “The fake news we care about is the fake news that’s spreading virally,” he says. “If a piece of fake news is spreading, we’ll have seen it.”
The more people who use SurfSafe, the more images the tool will ingest. If SurfSafe can get a few hundred thousand users in its first year, Bhat says he expects to have a database of 100 billion fingerprints.
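That math roughly checks out: a few hundred thousand users, each contributing even a thousand or so previously unseen images a day, would generate on the order of 100 billion fingerprints within a year.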
Varol views this as a valuable starting point because it saves people—professional fact-checkers included—a step. “This tool might capture the easy aspects of fact-checking, so you don’t have to go through the image and do your own background check,” he says.
Still, there are limitations that remain out of Bhat and Phadte’s control, the biggest of which is getting people to install the plug-in in the first place. After all, it’s partly a lack of digital literacy that makes people vulnerable to fake news. It’s a bit of a leap to expect someone whose main window to the internet is Facebook to take the additional step of installing a fact-checking plug-in. Another challenge: for now, the plug-in is available only on the Chrome, Firefox, and Opera desktop browsers, which means SurfSafe can’t flag content people find on their phones inside an app like Facebook. RoBhat Labs is working on a mobile version of the tool.
The simplest way to ensure mass adoption of a tool like this would be for platforms like Facebook and Twitter to integrate this technology themselves. Facebook has started a version of this for news articles. When fact-checking organizations flag a news story as false, Facebook diminishes the story’s reach and surfaces related articles debunking the original story right underneath it. The company recently began expanding that feature to photos and videos. For now, however, much of that work begins manually, with human fact-checkers vetting the content. Automating that process, as SurfSafe is attempting to do, comes with the risk of getting it wrong. “Companies are trying to be more careful about when they’re deploying such systems to clean their platforms,” Varol says. “Making one mistake will cost them a lot more than software developed by a university.”
That underscores the stakes of what RoBhat Labs has set out to accomplish. When your aim is to rid the internet of misinformation, the last thing you want to do is create even more.