
How CBC News will manage the challenge of AI

Our guidelines on the use of AI are preliminary and subject to change as the technology and industry best practices evolve. What won't change is our commitment to fact-based, accurate, original journalism done by humans for humans.

Bottom line: you will never have to question whether a CBC News story is real or AI-generated

The rising popularity of generative AI programs like ChatGPT will disrupt society in ways that are still difficult to imagine, including in CBC's work in public service journalism. (Shutterstock)

We use this editor's blog to explain our journalism and what's happening at CBC News. You can find more blogs here.


Last week, we provided CBC/Radio-Canada journalists with some preliminary guidance on how we will use artificial intelligence in our journalism. We want to share that guidance widely and publicly so that you, the people we serve, know exactly how we're managing this challenging new technology.

Maintaining your trust in our work and our journalistic standards is at the heart of our approach.

First, language matters in any discussion about AI, which is a broad term used to describe a variety of applications.

To be clear, many forms of AI are already baked into much of our daily work and tools. Suggested text, auto-complete, recommendation engines, translation tools, voice assistants: all of these fall near or under the broad definition of "artificial intelligence."

What has made headlines and raised many questions in our newsrooms lately has been "generative AI," a version of the technology that uses machine learning on vast amounts of data to produce high-quality original text, graphics, images and videos. Consumer-friendly versions of generative AI tools like ChatGPT and DALL-E have increased the public's awareness of the incredible power and risks of this technology.

As with the emergence of any significant new technology, there are both opportunities and dire warnings about what's to come. While the future is uncertain, it's clear AI will disrupt society in ways that are still difficult to imagine, including in our work in public service journalism.

Grappling with AI

CBC/Radio-Canada is already an industry leader on standards and best practices. We are a founding member of Project Origin, which aims to set up provenance standards and a process for the authentication of original media (i.e. to ensure people know what they're seeing or hearing was actually produced by its purported source).

And in February, CBC/Radio-Canada signed on to a first-of-its-kind framework for the ethical and responsible use of synthetic media, which is any media that has been fully or partly generated by AI.

But new journalism-specific questions are emerging nearly every day about the use of the technology. Inside CBC, we've grappled with thorny AI-related questions such as: Can I obscure the identity of a confidential source by creating an AI-generated version of them? Can I use facial recognition software in my investigative journalism? Can I recreate a host's voice to illustrate the generative power of AI? Can I use ChatGPT in my research?

Sam Altman, the CEO of OpenAI, the company behind ChatGPT, has called for global co-operation to regulate the use of artificial intelligence. (Jason Redmond/AFP/Getty Images)

Meanwhile, AI-related controversies have rippled through the wider news industry. American tech media publisher CNET was forced to correct dozens of AI-generated articles it had published without human oversight. BuzzFeed saw its stock price soar after announcing it would use generative AI to create content, although a few months later it shuttered its news division and laid off 15 per cent of its staff.

The editor-in-chief of a German tabloid was fired for publishing an AI-generated interview with Michael Schumacher, the Formula One racing legend who has not been heard from since he suffered a major brain injury in a 2013 skiing accident. And the Irish Times apologized last month for "a breach of the trust" after running an AI-generated opinion piece submitted by a hoaxster.

Commitment to trust and transparency

At the heart of the CBC/Radio-Canada approach will be the principles of trust, transparency, accuracy and authenticity that are already core to our journalistic standards and practices (JSP).

The bottom line: you will never have to question whether a CBC News story, photo, audio or video is real or AI-generated.

Here's what that means in practice:

  • No CBC journalism will be published or broadcast without direct human involvement and oversight.

  • We will never put content to air or online that has not been vetted or vouched for by a CBC journalist.

  • We are mindful of the significant increase in deep-faked audiovisual and text content, which requires a heightened level of skepticism and verification in our journalism.

  • We will not use or present AI-generated content to audiences without full disclosure. No surprises: audiences will be made aware of any AI-generated content before they listen, view or read it.

  • We will not use AI-powered identification tools for our investigative journalism (i.e. facial recognition, voice matching) without advance permission of our standards office, acting on my behalf.

  • We will never rely solely on AI-generated research in our journalism. We always use multiple sources to confirm facts.

  • We will not use AI to recreate the voice or likeness of any CBC journalist or personality except to illustrate how the technology works, and even then only in exceptional circumstances, with the advance approval of our standards office and of the individual being "recreated."

  • We will not use AI to generate text or images for audiences without full disclosure, and never without the advance approval of the standards office.

  • We will not use AI to generate voices or a new likeness for confidential sources whose identity we're trying to protect. Instead, we will continue practices well understood by audiences, such as voice modulation, image blurring and silhouette. In all cases, we are transparent and clear with audiences about how we've altered original content.

  • We will not feed confidential or unpublished content into generative AI tools for any reason.

We've told our journalists these guidelines are preliminary and subject to change as the technology and industry best practices evolve. And evolution is most certainly guaranteed in this fast-moving field.

What won't change is our commitment to fact-based, accurate, original journalism done by humans for humans.