AI-powered disinformation is spreading. Is Canada ready for the political impact?
Politics

AI-powered disinformation is spreading. Is Canada ready for the political impact?

Billions of people in more than 40 countries head to the polls this year. Canadians could be among them. But with fake content produced by generative artificial intelligence spreading online, figuring out what's real and what's not is becoming harder. Is Canada prepared for an AI election?

The rise of deepfakes comes as billions of people around the world prepare to vote this year

Prime Minister Justin Trudeau and Conservative Leader Pierre Poilievre take part in the National Prayer Breakfast in Ottawa on May 30, 2023. Canadians could be headed to the polls this year or next, depending on how much longer the Liberal government's deal with the NDP holds up. (Sean Kilpatrick/The Canadian Press)

Just days before Slovakia's national election last fall, a mysterious voice recording began spreading a lie online.

The manipulated file made it sound like Michal Simecka, leader of the Progressive Slovakia party, was discussing buying votes with a local journalist. But the conversation never happened; the file was later debunked as a "deepfake" hoax.

On election day, Simecka lost to the pro-Kremlin populist candidate Robert Fico in a tight race.

While it's nearly impossible to determine whether the deepfake file contributed to the final results, the incident points to growing fears about the effect artificial intelligence products are having on democracy, around the world and in Canada.

"This is what we fear ... that there could be a foreign interference so grave that then the electoral roll results are brought into question," Caroline Xavier, head of the Communications Security Establishment (CSE), Canada's cyber intelligence agency, told CBC News.

"We know that misinformation and disinformation is already a threat to democratic processes. This will potentially add to that amplification. That is quite concerning."

Those concerns are playing out around the world this year in what's being described as democracy's biggest test in decades.

Billions of people in more than 40 countries are voting in elections this year, including what's expected to be a bitterly disputed U.S. presidential contest. Canadians could be headed to the polls this year or next, depending on how much longer the Liberal government's deal with the NDP holds up.

"I don't think anybody is really ready," said Hany Farid, a professor at the University of California-Berkeley specializing in digital forensics.

Farid said he sees two main threats emerging from the collision of generative AI content and politics. The first is its effect on politicians: their ability to deny reality.

"If your prime minister or your president or your candidate gets caught saying something actually offensive or illegal, you don't have to cop to anything anymore," he said.

"That's worrisome to me, where nobody has to be held accountable for anything they say or do anymore, because there's the spectre of deepfakes hanging over us."

Hany Farid, a digital forensics expert at the University of California at Berkeley, takes a break from viewing video clips in his office in Berkeley, California on July 1, 2019. (The Associated Press)

The second threat, he said, is already playing out: the spread of fake content designed to harm individual candidates.

"If you're trying to create a 10-second hot mic of the prime minister saying something inappropriate, that'll take me two minutes to do. And very little money and very little effort and very little skill," Farid said.

"It doesn't matter if you correct the record 12 hours later. The damage has been done. The difference between the candidates is typically in the tens of thousands of votes. You don't have to move millions of votes."

Cyber intelligence agency prepares for 'the worst'

The consequences are very much on the minds of the experts working within the glass walls of CSE's 72,000-square-metre headquarters in Ottawa.

Last month, the foreign signals intelligence agency released a public report warning that bad actors will use AI tools to manipulate voters.

"Canada is not immune. We know this could happen," said Xavier. "We anticipate the worst. I'm hoping it won't happen, but we're ready.

"There's lots of work we continue to need to do with regards to education, and ... citizenship literacy. Absolutely, I think we're ready. Because this is what we trained for, this is what we get ready for, this is why we develop our people."

Communications Security Establishment Chief Caroline Xavier says the cyber espionage agency is concerned about how foreign actors will use generative AI content. (Christian Patry/CBC)

CSE's preparations for an AI assault on Canada's elections include the authority to knock misleading content offline.

"Could we potentially use defensive cyber operations should the need arise? Absolutely," Xavier said. "Our minister had authorized them leading up to the 2019 and the 2021 election. We did not have to use it. But in anticipation of the upcoming election, we would do the same. We'd be ready."

Xavier said Canada's continued use of paper ballots in national elections affords it a degree of protection from online interference.

CSE, the Canadian Security Intelligence Service (CSIS), the RCMP and Global Affairs Canada will also feed intelligence about attempts to manipulate voters to decision-makers in the federal government before and during the next federal election campaign.

WATCH: How AI-generated deepfakes threaten elections

Can you spot the deepfake? How AI is threatening elections

AI-generated fake videos are being used for scams and internet gags, but what happens when they're created to interfere in elections? CBC's Catharine Tunney breaks down how the technology can be weaponized and looks at whether Canada is ready for a deepfake election.

The federal government established the Critical Election Incident Public Protocol in 2019 to monitor and alert the public to credible threats to Canada's elections. The team is a panel of top public servants tasked with determining whether incidents of interference meet the threshold for warning the public.

The process has been criticized by opposition MPs and national security experts for not flagging fake content and foreign interference in the past two elections. Last year, a report reviewing the panel's work suggested the government should consider amending the threshold so that the panel can issue an alert when there is evidence of a "potential impact" on an election.

The Critical Election Incident Public Protocol likely will be studied by the public inquiry probing election interference later this month.

CSE warns that AI technology is advancing at a pace that ensures it won't be able to detect every single deceptive video or image deployed to exploit voters, and that some people inevitably will fall for fake AI-generated content before they head to the ballot box.

According to its December report, CSE believes that it is "very likely that the capacity to generate deepfakes exceeds our ability to detect them" and that "it is likely that influence campaigns using generative AI that target voters will increasingly go undetected by the general public."

Xavier said training the public to spot counterfeit online content must be part of efforts to ensure Canada is ready for its next federal campaign.

"The reality of it is ... yes, it would be great to say that there's this one tool that's going to help us decipher the deepfake. We're not there yet," she said. "And I don't know that that's the focus we should have. Our focus should truly be in creating professional scepticism.

"I'm hopeful that the social media platforms will also play a role and continue to educate people with regards to what they should be looking at, because that's where we know a lot of our young people hang out."

A spokesperson for YouTube said that since November 2023, it's been requiring content creators to disclose any altered or synthetic content. Meta, which owns Facebook and Instagram, said this year that advertisers also will have to disclose, in certain cases, their use of AI or other digital techniques to create or alter advertising on political or social issues.

Parliament isn't moving fast enough, MP says

It's not enough to put Conservative MP Michelle Rempel Garner at ease.

"I have over a decade's worth of speeches that are on the internet ... It'd be very easy for somebody to put together a deepfake video of me," she said.

She said she wants to see a stronger response to the threat from the federal government.

"I mean, we haven't even dealt with telephone scams as a country, right? We really haven't dealt with beta-version phone scams. And now here we are with very sophisticated technology that anybody can access and come up with very realistic videos that are indistinguishable [from] the real thing," said the MP for Calgary Nose Hill.

Conservative member of Parliament Michelle Rempel Garner rises during question period in the House of Commons on Parliament Hill in Ottawa on Friday, Oct. 2, 2020. (Sean Kilpatrick/The Canadian Press)

Those fears convinced Rempel Garner to help set up a bipartisan parliamentary caucus on emerging technology to educate MPs from all parties about the dangers, and opportunities, of artificial intelligence.

"There's some really tough questions that we're going to have to ask ourselves about how we deal with this, but also protect free speech. It's just something that really makes my skin crawl. And I just feel the sense of urgency, that we're not moving forward with it fast enough," she said.

U.S. President Joe Biden, meanwhile, has introduced a new set of government-drafted standards on watermarking AI-generated content to help users distinguish between real and phoney content.

Rempel Garner said a watermark initiative is something Canada also could do "in short order."

A spokesperson for Public Safety Minister Dominic LeBlanc suggested the government will have more to say on this subject at some point.

"We are concerned about the role that artificial intelligence could play in helping persons or entities knowingly spread false information that could disrupt the conduct of a federal election, or undermine its legitimacy," said Jean-Sébastien Comeau.

"We are working on measures to address this issue and will have more to say in due course."

AI companies need to take responsibility, expert says

Farid said regulations and legislation alone will not tame the "big bad internet out there."

Companies that permit users to create fake content could also require that such content include a durable watermark identifying it as AI-generated, he said.
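The watermarking idea can be illustrated with a toy example. This is a minimal sketch, not any vendor's actual scheme: it hides a hypothetical provenance tag in the least significant bits of raw pixel bytes. Real durable watermarks must survive compression, cropping and re-encoding, which this fragile approach would not.

```python
# Toy illustration of content watermarking. A production "durable" watermark
# for AI-generated media is far more robust; this least-significant-bit (LSB)
# scheme only demonstrates the embed/extract idea on raw pixel bytes.

TAG = b"AI-GENERATED"  # hypothetical provenance label

def embed_watermark(pixels: bytes, tag: bytes = TAG) -> bytes:
    """Hide `tag` in the lowest bit of successive pixel bytes."""
    bits = [(byte >> i) & 1 for byte in tag for i in range(8)]  # LSB-first
    if len(bits) > len(pixels):
        raise ValueError("image too small to hold the tag")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite only the lowest bit
    return bytes(out)

def extract_watermark(pixels: bytes, length: int = len(TAG)) -> bytes:
    """Read `length` bytes back out of the low bits."""
    bits = [pixels[i] & 1 for i in range(length * 8)]
    return bytes(
        sum(bits[b * 8 + i] << i for i in range(8)) for b in range(length)
    )
```

Because each pixel byte changes by at most one unit, the watermark is invisible to the eye, but a single round of lossy compression would destroy it, which is exactly why researchers push for schemes that survive editing.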

"I would like to see the open AI companies be more responsible in terms of how they are developing and deploying their technologies. But I'm also realistic about the way capitalism in the world works," Farid said.

A screen shows different types of deepfakes. At left, a face-swap image, which in this case puts actor Steve Buscemi's face on actress Jennifer Lawrence's body. In the middle, the puppet-master deepfake, which in this instance would involve animating a single image of Russian President Vladimir Putin. At right, the lip-sync deepfake, which would allow a user to take a video of Meta CEO Mark Zuckerberg talking, then replace his voice and sync his lips. (Submitted by Hany Farid)

Farid also called for making date-time-place watermarks standard on phones.

"The idea is that if I pick up my phone here and I take a video of police violence, or human rights violations or a candidate saying something inappropriate, this device can record and authenticate where I am, when I was there, who I am and what I recorded," he said.
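The capture-time authentication Farid describes could be sketched roughly like this. This is a hypothetical illustration, not an actual device API: the device hashes the recording and signs it together with location and time, so any later tampering is detectable. A real provenance system (such as the C2PA standard) would use public-key signatures so anyone can verify; the shared secret key here is a simplifying assumption.

```python
import hashlib
import hmac
import json

# Assumption: a secret key provisioned to the device at manufacture.
# Real systems would use an asymmetric key pair instead, so verifiers
# never need access to the signing secret.
DEVICE_KEY = b"per-device-secret-key"

def sign_capture(media: bytes, lat: float, lon: float, ts: float) -> dict:
    """Bind a hash of the recording to where and when it was made."""
    record = {
        "sha256": hashlib.sha256(media).hexdigest(),
        "lat": lat,
        "lon": lon,
        "timestamp": ts,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_capture(media: bytes, record: dict) -> bool:
    """Check both the signature and that the media matches its claimed hash."""
    claim = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, record["signature"])
            and claim["sha256"] == hashlib.sha256(media).hexdigest())
```

If a single byte of the video changes after capture, the hash no longer matches the signed record and verification fails, which is the property that makes the original footage defensible against deepfake claims.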

Farid said he sees a way forward through a combination of technological solutions, regulatory pressure, public education and after-the-fact analysis of questionable content.

"I think all of those solutions start to bring some trust back into our online world, but all of them need to be pushed on simultaneously," he said.

Friends don't let friends fall for deepfakes

Scott DeJong is focused on the public education part of that equation. The PhD candidate at Montreal's Concordia University created a board game to show how disinformation and conspiracy theories spread and has taught young people and foreign militaries how to play.

As AI technology advances, it might soon be impossible to teach people not to fall for fake content during elections. But DeJong said you can still teach people to recognize content as misleading.

"If you see a headline, and the headline is really emotional, or it's manipulative, those are good signs [that], well, this content is probably at least misleading," he said.

Scott DeJong plays his game 'Lizards & Lies,' where players try to either spread or stop conspiracy theories on social media. (Jean-Francois Benoit/CBC)

"My actual advice for people during ... election times is to try to watch things live. Because it's a lot harder to try to see the deepfakes or the false content when you're watching the live version," he said.

He also said Canadians can do their part by reaching out to friends and family when they post disinformation, especially when those loved ones refuse to engage with reputable mainstream news sources.

"The optimist in me likes to think that no one is too far gone," he said.

"Don't go in there accusing them or blaming them, but [ask] them questions as to why they put that content up. Just keep asking, why did you think that post was important? What about that post did you find interesting? What in that content engaged you?

"From there, you can peel back layers of the ideas and perspectives that led to them sharing that."

With reporting from Sarah Sears
