Exposing ‘Deepfake Geography’

SEATTLE, Washington, April 22, 2021 (ENS) – In one satellite image, a fire in New York’s Central Park appears as a smoke plume and a line of flames. In another, colorful lights on Diwali night in India, seen from space, seem to show fireworks exploding. But both are false images that demonstrate “location spoofing.”

The two photos – created by different people, for different purposes – are fake, but they look like genuine images of real places. And with the more sophisticated artificial intelligence (AI) technologies available today, researchers warn that such “deepfake geography” could become a growing problem.

“This isn’t just Photoshopping things. It’s making data look uncannily realistic,” said Bo Zhao, assistant professor of geography at the University of Washington, UW, and lead author of the study, which was published April 21 in the journal “Cartography and Geographic Information Science.”

“The techniques are already there,” Zhao said. “We’re just trying to expose the possibility of using the same techniques, and of the need to develop a coping strategy for it.”

The threat is very real. In 2019, the director of the National Geospatial-Intelligence Agency, the organization that supplies maps and analyzes satellite images for the U.S. Department of Defense, “implied that AI-manipulated satellite images can be a severe national security threat,” said Zhao.

U.S. officials told the “Defense One” news outlet in March 2019 that data integrity is a rising concern. “It’s something we care about in terms of protecting our data because if you can get to the data you can do the poisoning, the corruption, the deceiving and the denials and all of those other things,” said Lt. Gen. Jack Shanahan, who runs the Pentagon’s new Joint Artificial Intelligence Center. “We have a strong program protection plan to protect the data. If you get to the data, you can get to the model.”

A residential street in the city of Tacoma, Washington, overlooking Puget Sound, April 3, 2021 (Photo by David Seibold)

Inspired by these and other warnings, a team of researchers set out to identify new ways of detecting fake satellite photos. They, too, want to warn of the dangers of falsified geospatial data and they are calling for a system of geographic fact-checking.

Co-authors on the study are Yifan Sun, a graduate student in the UW Department of Geography; Shaozeng Zhang and Chunxue Xu of Oregon State University; and Chengbin Deng of Binghamton University in New York.

As Zhao and his co-authors point out, fake locations and other inaccuracies have been part of mapmaking since ancient times, due in part to the very nature of translating real-life locations to map form, as no map can capture a place exactly as it is.

But some inaccuracies in maps are spoofs created by the mapmakers themselves. So-called “paper towns” are fake cities, mountains, rivers or other features discreetly placed on a map to catch copyright infringement: if the invented feature turns up on a competitor’s map, it proves the map was copied.

Today, with the prevalence of geographic information systems, Google Earth and other satellite imaging systems, location spoofing involves far greater sophistication, researchers say, and carries with it more risks.

To study how satellite images can be faked, Zhao and his team turned to an AI framework that has been used to manipulate other types of digital files.

When applied to the field of mapping, the algorithm essentially learns the characteristics of satellite images of an urban area, then generates a deepfake image by feeding those learned characteristics onto a different base map, much as popular image filters can map the features of a human face onto a cat.

Next, the researchers combined maps and satellite images from three cities – Tacoma, Seattle and Beijing – to compare features and create new images of one city, drawn from the characteristics of the other two. They designated Tacoma their “base map” city and then explored how geographic features and urban structures of Seattle, which is similar to Tacoma in topography and land use, and Beijing, which is different, could be incorporated to produce deepfake images of Tacoma.
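The study does not publish its code in this article, but the general technique it describes is adversarial image-to-image translation: a generator learns to turn base-map tiles into satellite-style tiles, while a discriminator learns to tell generated tiles from real ones. The sketch below is a minimal, hypothetical illustration of that idea; the tiny networks, 64-pixel tiles and random placeholder data are assumptions for demonstration, not the authors’ implementation.

```python
# Minimal sketch of adversarial base-map -> satellite-style translation.
# Illustrative only: toy networks and random placeholder tiles.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Translates a 3-channel base-map tile into a satellite-style tile."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Scores whether a satellite-style tile looks real or generated."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),
        )
    def forward(self, x):
        return self.net(x)

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

# Placeholder tensors standing in for paired map/satellite tiles.
base_map = torch.rand(4, 3, 64, 64) * 2 - 1
real_sat = torch.rand(4, 3, 64, 64) * 2 - 1

for step in range(3):  # a real run would iterate over a large tile dataset
    # Discriminator step: separate real satellite tiles from generated ones.
    fake_sat = G(base_map).detach()
    real_score, fake_score = D(real_sat), D(fake_sat)
    d_loss = bce(real_score, torch.ones_like(real_score)) + \
             bce(fake_score, torch.zeros_like(fake_score))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: produce tiles the discriminator scores as real.
    fake_score = D(G(base_map))
    g_loss = bce(fake_score, torch.ones_like(fake_score))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

In a setup like this, the “characteristics” of one city live in the generator’s learned weights, which is why patterns from Seattle or Beijing can be imposed on a Tacoma base map once training is complete.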

The untrained eye may have difficulty detecting the differences between real and fake, the researchers point out. A casual viewer might attribute the colors and shadows to poor image quality. To try to identify a “fake,” the researchers homed in on more technical aspects of image processing, such as color histograms and frequency- and spatial-domain features.
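To give a sense of what such measurements involve, the sketch below compares a real tile and a suspect tile on two of the signals named above, per-channel color histograms and frequency-domain energy. The file names and the 0.25 frequency cutoff are hypothetical; this is an illustration of the kind of check involved, not the authors’ detection pipeline.

```python
# Illustrative comparison of two image tiles: color histograms and
# high-frequency spectral energy. Hypothetical file names.
import numpy as np
from PIL import Image

def color_histogram(img: np.ndarray, bins: int = 32) -> np.ndarray:
    """Per-channel color histogram, normalized to sum to 1."""
    hists = [np.histogram(img[..., c], bins=bins, range=(0, 255))[0]
             for c in range(3)]
    h = np.concatenate(hists).astype(float)
    return h / h.sum()

def high_freq_energy(img: np.ndarray, cutoff: float = 0.25) -> float:
    """Fraction of spectral energy outside a central low-frequency window
    of the grayscale 2-D FFT; generated tiles can show a different
    high-frequency signature than real sensor imagery."""
    gray = img.mean(axis=2)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray))) ** 2
    h, w = spectrum.shape
    ch, cw = int(h * cutoff), int(w * cutoff)
    low = spectrum[h // 2 - ch:h // 2 + ch, w // 2 - cw:w // 2 + cw].sum()
    return 1.0 - low / spectrum.sum()

def compare(real_path: str, suspect_path: str) -> None:
    real = np.asarray(Image.open(real_path).convert("RGB"), dtype=float)
    suspect = np.asarray(Image.open(suspect_path).convert("RGB"), dtype=float)
    # L1 distance between normalized histograms, scaled to the range 0-1.
    hist_dist = np.abs(color_histogram(real) - color_histogram(suspect)).sum() / 2
    print(f"histogram distance (0-1): {hist_dist:.3f}")
    print(f"high-frequency energy, real:    {high_freq_energy(real):.3f}")
    print(f"high-frequency energy, suspect: {high_freq_energy(suspect):.3f}")

if __name__ == "__main__":
    compare("tacoma_real_tile.png", "tacoma_suspect_tile.png")  # hypothetical files
```

Large gaps in either measure would flag a tile for closer inspection, which is the spirit of the geographic fact-checking the authors call for.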

Some simulated satellite imagery can serve a purpose, Zhao said, especially when representing geographic areas over periods of time to, say, understand urban sprawl or climate change. There may be a location for which there are no images for a certain period of time in the past, or in forecasting the future, so creating new images based on existing ones – and identifying them as simulations – could fill in the gaps and help provide perspective.

The study’s goal was not to show that geospatial data can be falsified, Zhao said. Rather, the authors hope to learn how to detect fake images so that geographers can begin to develop data literacy tools, similar to today’s fact-checking services, for public benefit.

Featured image: What may appear to be an image of the city of Tacoma, Washington is, in fact, a simulated one, created by transferring visual patterns of Beijing onto a map of a real Tacoma neighborhood. 2020 (Image courtesy Zhao et al.)