
A former Alberta justice minister claims videos of him are 'fake.' Not everyone agrees


Deepfakes are becoming more worrisome and frequent. But experts say false claims are, too

At left, a video purported to picture Jonathan Denis, Alberta's former justice minister, in which it appears Denis makes phone calls while performing a caricature of an Indigenous person. At right, a file photo of a green wireframe model covering an actor's lower face during the creation of a deepfake, which uses artificial intelligence to create convincing faked footage of real people. (Twitter/Reuters TV via Reuters)

UPDATE (June 18, 2024): In September 2023, the Alberta Court of King's Bench signed an order stating that the videos in this story were not authentic. That order was the result of an affidavit filed in an undefended action, in which an expert claims the videos may have been manipulated using deepfake techniques. CBC consulted other experts, who concluded such determinations are extremely hard to make.


Near the end of September, a series of videos were posted to social media that purported to show some familiar figures in Calgary's political and legal worlds taking turns performing racist Indigenous caricatures.

One video appeared to take place at a barbecue, and another around a table with open bottles of alcohol and empty plates. The men purportedly pictured were Jonathan Denis, Alberta's former justice minister under the Progressive Conservative government from 2012 to 2015, and Calgary-based businessman and political activist Craig Chandler.

The videos spread quickly through social media to the point where Denis felt compelled to respond.

At the time, he offered an apology with a caveat. Later, he would claim the videos were fakes, and the duo would submit what they called proof of that claim.

But experts say claims of falsity in situations like this are hard to prove, because the detection technology is debatable, even unreliable. They also say such claims hint at a more significant problem to come.

The initial response

After the four videos floated around social media for some time, Denis sent a statement to local media outlets, writing that while he had no recollection of the events, it was possible they took place years ago while he was under the influence of alcohol. He said he apologized unreservedly to anyone he offended, if the videos depicted "real events." It would be his sole statement on the matter at the time.

Chandler, meanwhile, agreed to an interview with CBC News. He said the video of the barbecue was taken during a private function with his close friends. He said he was trying to cheer his friend Denis up by joking about Brocket 99, a fake radio show produced in Lethbridge, Alta., in the late 1980s, which was based on racist stereotypes of First Nations people.

It was ridiculous, Chandler said, that this had become an issue, and that he was apparently not allowed to joke about such a thing within the confines of his own home at a private barbecue. It was the same thing Dave Chappelle had to go through, he said, this "cancel culture."

But Chandler would say something else during that interview. He said Denis had a contact in Hollywood who had done an audit of the video. That contact, Chandler said, had determined that though the video was "correct" and the words had been said, the Indigenous accent had been "manipulated" and "exaggerated."

"Were the words said? Yeah. Was the accent there? Don't know," Chandler said at the time.

Exactly a month later, it was Calgary Ward 13 Coun. Dan McLean who broadly apologized for "mistakes in the past" after other videos surfaced, purportedly involving McLean along with Chandler and Denis, which also included racist mockery of Indigenous people. He would later step back from council committees and boards and sit with a circle of Indigenous elders to "learn to grow, change and be better."

Ward 13 Coun. Dan McLean released a video in late October, standing next to Alice Marchand, who he said was a dear friend. After videos were posted to social media that showed a group of men participating in racist mocking of Indigenous people, McLean stepped back from council committees and boards. (Facebook)

But though McLean was apologizing and stepping back, Denis' firm, Guardian Law Firm, was taking a different position: that the videos were fake. The firm told the Calgary Herald and the Western Standard that it had evidence the videos had been doctored, and added that the police were engaged in the matter.

Three days after McLean stepped down from city council committees, a new email landed in news agency inboxes, sent by Chandler. The subject line declared: "Videos reviewed by independent agency prove videos are fake."

He forwarded the results of an analysis done by Reality Defender, a "deepfake" detection platform headquartered in New York, which was incubated by the AI Foundation and launched as a corporation in February. The platform doesn't involve human analysis, instead relying on a tool that scans media for signs of manipulation.

Deepfakes use artificial intelligence to create convincing faked footage of real people. You may have seen a series of videos involving a fake Tom Cruise on the social media video platform TikTok pulling off some impressive magic tricks, or a fake Elon Musk being held hostage in a warehouse.

But experts are becoming increasingly worried that the growing prevalence and sophistication of these "deepfakes" is making detection all the more difficult.

As deepfakes become more convincing, there's more of an opportunity for them to be used to destroy reputations with words and images that are not real. By the same token, it is also easy for people legitimately caught on tape to falsely claim it never happened, and to allege that the visual evidence was somehow doctored.

So what was the case with Denis, Chandler, and McLean? Denis and Chandler contend that they are the victims of faked videos, while McLean didn't respond to CBC News' request for comment.

Deepfakes and probabilities

Identifying and removing "manipulated" media has been an urgent priority for companies like Meta over the past number of years. However, the category of "manipulation" is broad: it can involve using simple software to add blurs to photographs or to make audio clearer. At the other end of the spectrum, it can involve using artificial intelligence to create "deepfakes."

In his release, Chandler said he had submitted the videos to Reality Defender. Ben Colman, CEO of Reality Defender, said its platform determined that the four videos were "probabilistically fake."

"We live in the world of probabilities. And so we are comfortable saying that it's highly likely that the assets are fake, though we do not have the originals," said Colmanin an interview, adding that the removal of conversion or compression would not change the company's conclusion.

The company relies on its platform alone; no human experts review its conclusions, something Reality Defender views as an asset, because it believes synthetic media can fool humans. One part of its analysis determined that two videos were 78 per cent "likely manipulated," while two others were assessed at 66 and 69 per cent.

Despite Chandler's contention at the time that only the Indigenous accent had been exaggerated in a video in which he had appeared, and not the footage or the words spoken, Reality Defender's initial analysis provided to CBC News showed only the video results, and did not indicate whether the audio had been tested.

In a follow-up interview, Colman said its platform had tested the audio, which he said was manipulated in the style of a Nancy Pelosi video, in which the U.S. House Speaker's audio was slowed down to make her sound impaired.

U.S. House Speaker Nancy Pelosi speaks on the House floor at the Capitol in Washington D.C., on Nov. 17. In 2019, a video of Pelosi manipulated to make it appear as though she was impaired picked up millions of views on social media. (AP Photo/Carolyn Kaster)

Upon being contacted to share the audio reports, Denis' law firm said they had not received them, adding that Reality Defender's conclusion was "definitive." Later that day, they shared the reports, which showed that Reality Defender's "all-purpose advanced speech feature spoof detector" had determined the audio was "99 per cent likely manipulated."

Colman said he couldn't speak directly to Chandler's claim that accents had been exaggerated.

"[Our engine] just detects that it was manipulated. The sentiment, or the reason for it, is nothing that we can speculate on," Colman said.

Denis' law firm did not respond to a follow-up question requesting more information on what, specifically, the two were alleging had been faked in the video.

A second analysis

In the days and weeks after Chandler sent out the press release contending the videos had been faked, former Calgary Conservative MP Joan Crockatt, speaking on behalf of Denis through her Crockatt Communications consultancy company, contacted CBC News on multiple occasions with requests to take the video down.

These are definitive findings.
- Joan Crockatt of Crockatt Communications, speaking on behalf of Jonathan Denis

When CBC News declined to take down the videos, Crockatt submitted a second analysis, from the platform Deepware, which ran two of the videos through four different models.

One model, the face animation app Avatarify, indicated that it detected a deepfake on one of the videos at 99 per cent probability. However, none of the three other models listed detected a deepfake.

"These are definitive findings," Crockatt wrote in a statement, highlighting the result from Avatarify.

Contacted for comment by CBC News, Zemana, the Turkey-based company that runs Deepware, requested copies of the analysis.

Upon viewing the analysis, Yağızhan Atmaca, CTO of Zemana, repudiated the earlier results, saying the Avatarify model had in fact returned a false positive because of the high level of compression on the video.

"Nobody can say, 100 per cent [certainty] on such a bad video," Atmaca said, adding that the AI models the company usescan oftenmake mistakes.

Contacted for comment on the model returning a false positive, Denis' law firm said they had not had any subsequent communication from Deepware.

When asked whether Deepware informs its clients if its model produces a false positive, Atmaca pointed to a note present on the company's results page, which reads, "As Deepware Scanner is still in beta, the results should not be treated as an absolute truth or evidence."

What's fake, what's real

CBC News asked another group, the Media Verification (MeVer) team, to look at the videos posted to Twitter. They applied their own deepfake detection service and three other detection algorithms to analyze the videos. Their analysis suggested that the possibility of the videos being deepfakes was very low.

There are some caveats, said Symeon Papadopoulos, principal researcher at the Information Technologies Institute and head of the MeVer group: the field of deepfake generation is rapidly evolving, and a very new, sophisticated model that is undetectable by state-of-the-art detectors, such as those used in the analysis, is always possible. In addition, though there are no obvious signs, researchers can't exclude other kinds of video tampering using conventional video editing tools.

That said, it would be surprising if the videos were fakes, Papadopoulos said. They don't bear any of the usual artifacts of deepfake videos, the visual clues left behind in the finished product by the generation model, and some of the angles at which the videos are shot would be very challenging to fake.

Other experts in the field doubt the accuracy of online verification platforms altogether.

Hany Farid is a professor who specializes in digital forensics at the University of California, Berkeley. He also sits on TikTok's content advisory board.

A member of the Microsoft-led team that pioneered PhotoDNA, which is used globally to stop the spread of child sexual abuse imagery, Farid was named a lifetime fellow of the National Academy of Inventors in 2016 and has been referred to as the "father" of digital image forensics.

Hany Farid, a computer science professor at the University of California, Berkeley, sits on TikTok's content advisory panel and has been referred to as the father of digital forensics. (Submitted by Hany Farid)

Farid viewed the videos frame by frame and said they showed no signs of manipulation or synthesis. He said he didn't think online platforms were sufficiently accurate to say anything definitive, particularly not on low-resolution videos like those in question.

He likened the situation, in which the men initially offered vague apologies and later claimed the videos were fake, to Donald Trump's conversation with Billy Bush of Access Hollywood in 2005, in which he bragged his fame enabled him to grope women. As a candidate for president in the 2016 election, Trump apologized for those comments, but later questioned their authenticity.

The art and science of the deepfake

Farid said the devil is in the details when it comes to online resources that analyze video. Most detection techniques are trained on very specific sets of videos; handheld footage like this, for example, often isn't among them.

State-of-the-art detectors have relatively low accuracies, Farid said, at a rate of around 90 per cent. That might sound impressive, but it means the detectors make a lot of mistakes: they will say that real things are fake, and vice versa.
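To see why a 90 per cent accuracy rate still produces many mistakes, consider some back-of-the-envelope arithmetic. The numbers below are hypothetical and not drawn from any of the analyses in this story; this is a minimal sketch of the base-rate problem, written in Python:

    # Hypothetical illustration of why a "90 per cent accurate" detector
    # still makes many mistakes when most videos are genuine.

    def expected_errors(total_videos, fake_fraction, accuracy):
        """Expected false positives and false negatives, assuming the
        detector is equally accurate on real and fake videos."""
        fakes = total_videos * fake_fraction
        reals = total_videos - fakes
        false_positives = reals * (1 - accuracy)  # real videos flagged as fake
        false_negatives = fakes * (1 - accuracy)  # fakes that slip through
        return false_positives, false_negatives

    # Assumed numbers: 10,000 videos, 1 per cent actually fake, 90% accuracy.
    fp, fn = expected_errors(10_000, 0.01, 0.90)
    print(fp, fn)  # 990.0 false alarms, 10.0 missed fakes

In this hypothetical, 990 genuine videos are wrongly flagged as fake while 10 actual fakes slip through, which is why a single "likely manipulated" score settles very little on its own.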

Plus, running videos through different techniques provides wildly different answers, from not at all fake, to maybe fake, to definitely fake.

"At that point, let's stop calling this science. I mean, now we're just making stuff up," he said.

Farid said he didn't have a lot of confidence in the results of the analyses provided, adding that automatic techniques are simply not close to being sufficient to say with certainty what's real and what's fake, particularly for the videos provided, where there's nothing obviously wrong in terms of the types of synthesis artifacts one would expect to see.

"I think there's something dangerous about saying, 'Well, just upload the video, and we'll tell you what's what.' The field is not there," Hany said. "These automatic techniques simply don't exist today. They're not even close to existing."

At that point, let's stop calling this science. I mean, now we're just making stuff up.
- Hany Farid, University of California, Berkeley computer science professor

For example, in the videos, which are handheld, grainy, low-resolution and shot from a distance, the individuals involved often turn away from the camera.

"Even the best deepfakes go look at the Tom Cruise TikTok deepfakes, and slow down and watch frame by frame by frame by frame, and you will see little artifacts, because synthesis is very hard," Farid said.

A still from the disputed video, which appears to show former Alberta Justice Minister Jonathan Denis. Among other reasons, experts say this video would be difficult to fake, given the wine bottle occluding part of the face, which would typically lead a deepfake generation model to create artifacts around that area. (Twitter)

Farid explained that there are three general categories of deepfakes. The first is the face-swap deepfake, which is probably what most people are familiar with; the Tom Cruise deepfake is an example. It takes a video of a person moving and replaces their face, eyebrow to chin, cheek to cheek, with someone else's.

A lip-sync deepfake would take a video of someone talking, create a new audio stream, either synthesized or impersonated, and alter that person's mouth to be consistent with the new audio.

A puppet master deepfake, finally, would take a single image of a person and animate a representation of that person based on what a "puppet master" did in front of a camera.

Each of these techniques has its strengths, but each has its weaknesses, too, which introduce artifacts. For example, the lip sync deepfake can create a "Frankenstein monster" effect when the mouth is doing one thing and the head another, while a puppet master deepfake has trouble simulating certain effects, like a hanging strand of hair bouncing up and down while someone nods their head.

There are three different categories of deepfake today, according to Hany Farid, a computer science professor at the University of California, Berkeley. At left, the face-swap image, which in this image sees actor Steve Buscemi's face swapped onto actress Jennifer Lawrence's body. In the middle, the puppet-master deepfake, which in this instance would involve the animation of a single image of Russian President Vladimir Putin. At right, the lip-sync deepfake, which would allow a user to take a video of Meta CEO Mark Zuckerberg talking, then replace his voice and sync his lips. (Submitted by Hany Farid)

All of that means the scenes depicted in the Denis and Chandler videos would be very difficult to fake. While not impossible, the videos are not shot in the form most of the best deepfakes tend to take with today's technology: newscasters or politicians standing in front of a camera, not moving a lot, not occluding the face.

"You should never say never. It's dangerous. Everything is possible, of course. But you have to look at likelihoods," Farid said. "We've enumerated the fact that all these different automated techniques are all over the place in terms of what they're saying.

"But the knowledge of how these things are made, how difficult it would be to make them, I think it's extremely unlikely that these are deepfakes."

As for claims that the audio was the part of the video that had been manipulated?

"Is it possible that somebody took that recording, took the audio of him and put it through some type of morphing, or modulation to change his intonation or his accent? Sure, that's possible," Farid said.

"But I don't know a voice modulator that makes you sound insulting."

The implications moving forward

Farid said that though the common perception is that deepfakes today are advanced enough to create any reality, the technology hasn't yet reached that point. He said that today, people claiming videos are fake is a bigger problem than actual faked videos.

"It's what's termed the liar's dividend. That when we enter a world where anything can be manipulated or synthesized, well, then we can dismiss inconvenient facts," he said.

"We can say a video of me doing something illegal or inappropriate or offensive, fake. Human rights violations, fake. War crimes, fake. Police brutality, fake. And that's really dangerous."

Contacted for comment after CBC News looked into the videos in more detail, Denis' law firm said the previous statement would be Denis' "last and final" on the matter, and asked: "Does the CBC want to continue to contribute to online harassment by posting falsified videos on its website?"

A file photo of Calgary-based businessman and political activist Craig Chandler. Though he initially said only an accent had been exaggerated in a disputed video posted to social media, he later said new information had led him to question his original statements. (Terri Trembath/CBC)

Chandler agreed to a follow-up interview, in which he said Calgary police and Alberta RCMP investigations were ongoing into the person who "filmed and then manipulated these videos." A spokesperson for the RCMP said it would not confirm whether an investigation existed, citing privacy, while Calgary police would only say it was "currently investigating various allegations" and would not provide further comment.

Though he initially said only the accent had been manipulated, Chandler said new information had led him to question his initial statements to CBC News. He said he couldn't clarify exactly what had been manipulated in the videos, based on advice from his legal counsel.

"There could be some footage that's real. But the content and the context may not be," he said.

He said that this story "had legs" and was not going away, but that he was limited in what he could say based on advice from counsel.

"I think the people who are going to determine it are not these companies, but the law," he said.