Liberals' proposed AI law too vague, costly, Big Tech executives tell MPs - Action News
Politics

Liberals' proposed AI law too vague, costly, Big Tech executives tell MPs

Representatives from Big Tech companies say a Liberal government bill that would begin regulating some artificial intelligence systems is too vague.

Companies argue the proposed law doesn't differentiate between high and low-risk AI systems

Bill C-27 was tabled in 2022 to target what are described as "high-impact" AI systems. (Michael Dwyer/The Associated Press)


Amazon and Microsoft executives told MPs at a House of Commons industry committee meeting Wednesday that Bill C-27 doesn't differentiate enough between high- and low-risk AI systems.

The companies said abiding by the law as written would be costly.

Nicole Foster, director of global artificial intelligence and Canada public policy for Amazon, said using the same approach for all applications is "very impractical and could inadvertently stifle innovation."

The use of AI by a peace officer is considered high-impact in all cases, she said, even when an officer is using auto-correct to fill out a ticket for a traffic violation.

"Laws and regulations must clearly differentiate between high-risk applications and those that pose little or no risk. This is a core principle we have to get right," Foster said.

"We should be very careful about imposing regulatory burdens on low-risk AI applications that can potentially provide much-needed productivity boosts to Canadian companies both big and small."

Microsoft gave its own example of how the law doesn't seem to differentiate based on the level of risk that particular AI systems introduce.

Industry Minister François-Philippe Champagne has been offering some information about planned amendments to the bill. (Adrian Wyld/The Canadian Press)

An AI system used to approve a person's mortgage, handling sensitive details about their finances, would be treated the same as one used to optimize package delivery routes using public data.

Industry Minister François-Philippe Champagne has offered some details about amendments the government plans to put forward to keep the bill up to date.

But in spite of that additional detail, companies said the definitions in the bill are still too ambiguous.

Amanda Craig, senior director of public policy at Microsoft's office of responsible AI, said not differentiating between the two would "spread thinly the time, money, talent and resources of Canadian businesses and potentially mean finite resources are not sufficiently focused on the highest risk."

Bill C-27 was tabled in 2022 to target what are described as "high-impact" AI systems.

But generative AI systems such as ChatGPT, which can create text, images and videos, became widely available to the public only after the bill was first introduced.

The Liberals now say they will amend the legislation to introduce new rules, including one requiring companies behind such systems to take steps to ensure the content they create is identifiable as AI-generated.

Earlier this week, Yoshua Bengio, dubbed a "godfather" of AI, told the same committee that Ottawa should put a law in place immediately, even if that legislation is not perfect.

Bengio, the scientific director at Mila, the Quebec AI Institute, said AI that is as smart as a human being, and eventually "superhuman" systems, could arrive in just a few years.

Advanced systems could be used for cyberattacks, he said, and the law needs to get ahead of that risk.

AI already poses risks, Bengio said. Deepfake videos, generated to make it look like a real person is doing or saying something they never did, can be used to spread disinformation.