Spotting fake images online: Tips from experts and why it's important

Published 2024-10-22 06:00

Look for flaws and things that look odd


⭐️HERE’S WHAT YOU NEED TO KNOW⭐️

  • With artificial intelligence (AI), fake images have never been easier to make. 
  • People can use fake images to manipulate other people.
  • This is dangerous because it can cause people to lose trust in what they see, which is important for democracy.
  • Experts say we need to be more skeptical of what we see and ask questions.
  • Read on to find out what tips they shared. ⬇️⬇️⬇️

Just a few weeks ago, after Hurricane Helene hit parts of Florida, an image surfaced on social media that got a lot of people emotional.

The image depicted a little girl in a boat holding a puppy, crying, presumably while her home was being flooded. 

Or rather, that’s what it appeared to be.

Turns out, the image was made by artificial intelligence. It’s not clear why it was made, but images like these are becoming more common as AI tools become more widely accessible. 

Experts say fake images can be dangerous because they can manipulate large groups of people, and that kids need to start developing an eye for spotting them.

How can we spot fake images?

To fight back against the dangers of fake images, experts say we need to make a habit of questioning what we see online.

“We can no longer be on autopilot on our phones and mindlessly consuming content,” said Craig Silverman, a journalist and expert on misinformation.

“We need to constantly be aware of how easy it is to manipulate information, to not rush to judgment and to ask questions before we make up our minds about what we see.” 

But what questions do we ask ourselves?

Eric Szeto, an investigative journalist with the CBC, has been working on a project to help the organization build more trust with its audience through forensic investigations of fake images.

He said that AI-generated images often have small details that aren’t quite right. 

“Sometimes people won’t have the right number of fingers or the lighting is too perfect or buildings in the background may blend into one another,” he said. 

Two fake images show a girl in a boat crying with a puppy.

These images started circulating online during Hurricane Helene. The image on the left shows a puppy with black markings on its muzzle while the image on the right, which appears to show the same little girl, shows a puppy with blond fur on its muzzle. (Image credit: Mike Engleman/X, Larry Avis West/Facebook) 

“If you look at the photo of the girl from Hurricane Helene, her hands are blurry compared to her face, and so are the dog’s paws,” he said. 

He said that an image looking too perfect or having inconsistent lighting may also signal that the image has been AI-generated. 

If you’re still unsure, Silverman said it might be a good idea to investigate more deeply by Googling the image and seeing if other information online supports what you’re seeing. 

“See if you can find other images from this event or location to check, was this person really here? Is there other footage or reporting from the event?”

Silverman said you can also upload the image directly into a reverse image search engine to see where else it exists online and if that provides clues into its origins. 

So why are fake images so dangerous? 

Silverman, who works at ProPublica and is based in Toronto, Ontario, said that it’s never been easier to fake images, and they’re getting more and more convincing. 

“Before, where you might’ve had to have expertise in Photoshop to generate video and images, today you don’t need extra skill or a lot of people to do it,” he told CBC Kids News.

These fakes can then be used to manipulate large groups of people into believing things that aren’t true.

At a small and harmless level, you could manipulate people into believing your hair is blond when it’s really brown. 

A collage of social media posts show different women wearing t-shirts with the phrase Swifties for Trump.

Back in August, former U.S. president Donald Trump posted AI-generated images of Taylor Swift and Swifties supporting his presidency to his social media platform Truth Social. (Image credit: realDonaldTrump/Truth Social) 

At the highest level, Silverman said altered or AI-generated images threaten the democratic systems of countries like Canada.

Democracies are countries where every citizen who is eligible to vote gets a say in who leads the country. 

“Democracy relies on informed citizens voting based on information they’ve been able to gather about candidates that they believe is real,” said Silverman.

But if, for example, people start using fake images to make certain political candidates look bad, it can cause people to vote in a way that they otherwise wouldn’t. 

“It’s already causing people to disengage and say, ‘Well, I don’t know what to trust anymore, so I won’t bother voting and I’ll stay as far from news and politics as possible,’” said Silverman.

An AI-generated photo shows Katy Perry at the Met Gala in a lavish gown.

This is an AI-generated image of Katy Perry at the 2024 Met Gala. She was not there. It was one of several fake photos of celebrities that swirled online after this year's gala. After it went viral, she reposted it to her Instagram. (Image credit: katyperry/Instagram)

What else needs to be done?

Outside of personal responsibility, Silverman said that governments, social media platforms and those who create AI tools need to step up. 

Companies that make AI tools, for example, should find ways to flag that images were made with their software. 

“The most common way is with metadata,” he said.

“When the image is created, there is info in the file itself so that when someone uploads it to social media, the platform will know it was AI-generated and can watermark it to notify users.” 
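To get a feel for the metadata Silverman is describing, here is a minimal sketch using the Pillow imaging library. It is a hypothetical illustration, not any platform's actual detection code: it just prints whatever EXIF tags an image file carries, such as a "Software" tag naming the tool that produced it. Note that metadata is easy to strip or forge, so an image with no metadata is not automatically genuine.

```python
from PIL import Image
from PIL.ExifTags import TAGS

def describe_metadata(path):
    """Print any EXIF tags embedded in an image file.

    Some AI generators and editing tools record their name in tags
    such as 'Software'. Absence of metadata proves nothing: social
    platforms often strip it when an image is uploaded.
    """
    exif = Image.open(path).getexif()
    if not exif:
        print("No EXIF metadata found.")
        return
    for tag_id, value in exif.items():
        # Translate the numeric tag ID into a readable name when possible
        tag_name = TAGS.get(tag_id, tag_id)
        print(f"{tag_name}: {value}")
```

Running this on a downloaded image would list tags like `Software` or `DateTime` if they survived the upload; industry labelling efforts go further than plain EXIF, embedding signed provenance records that platforms can verify before adding an "AI Info" style tag.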

He said that some companies like Meta, which owns Instagram, have already begun labelling their content.

A fake image of a person with a golden bear on Instagram.

Some content on Meta is now labelled with a tag called AI Info to signal that the image has been manipulated. (Image credit: Meta) 

Finally, he said governments also need to create laws around AI images, for example, making it illegal to create fake images of someone without their consent.

Media organizations like the CBC also have a responsibility to authenticate the images they use, which is part of the project Szeto is working on.

“We not only want to investigate and authenticate images, but show the audience how we did it and what tools they use so that they can verify it themselves at home,” he said. 

“That way, when people come to CBC and see images, we can say: ‘Hey, this has been verified and this is reliable.’” 

Have more questions? Want to tell us how we're doing? Use the “send us feedback” link below. ⬇️⬇️⬇️