
When technology discriminates: How algorithmic bias can make an impact

Algorithms are increasingly being turned to as a way for companies to make objective decisions, including ones that have complex social implications. But they're not always as unbiased as you might think.

Research has shown that algorithms can actually perpetuate, and even accentuate, social inequality.

Algorithms are increasingly being turned to as a way for companies to make objective decisions, including ones that have complex social implications and the potential to have a profound impact on people's lives. But they're not always as unbiased as you might think. (Reuters)

When it comes to issues like whether to hire someone for a job, give them a mortgage, or even identify them as a suspect in a crime, human bias can have far-reaching ramifications.

And as more industries turn to technology, and specifically algorithms, to cut costs and increase efficiency, a new consideration arises: When it comes to some of those difficult decisions, can algorithms really yield fairer results?

In theory, algorithms should be immune to the pitfalls of human bias. But despite their seemingly neutral mathematical nature, algorithms aren't necessarily any more objective than humans.

In fact, without proper checks and balances, their use could perpetuate, and even accentuate, social inequality.

"The prerequisite to algorithmic decision-making is having a ton of data," says futurist and CBC commentator Jesse Hirsh, who recently completed a Masters in media production, with a focus on algorithmic media and transparency.

And in our data-rich world, that prerequisite is easily met. In other words, algorithms are everywhere.

A device of our data-rich world

Any organization with lots of data at its disposal is likely using algorithms to sort that information, organize it, and ultimately make decisions based on it.

We already know our Facebook timelines are organized based on what the algorithm deems most relevant to us. And we may take for granted the fact that Netflix uses algorithms to help suggest what movie or television show we want to watch next.

Algorithms are used to help make decisions about everything from insurance rates to credit scores, employment applications to school admissions. (Manjunath Kiran/AFP/Getty Images)

But what might surprise some people is just how many other industries and sectors are already using algorithms to help make decisions. And it's not just trivial decisions, but ones that have complex social implications and the potential to have a profound impact on people's lives, ranging from hiring and financial lending to criminal justice and law enforcement.

Organizations are increasingly turning to algorithms to help make decisions about things like insurance rates, credit scores, employment applications and school admissions, Hirsh says.

"There's also tons of legal ones that look at potential court decisions, tax issues, and in the U.S., parole."

Trusting machines more than people

The impetus to turn to algorithms is clear; we want these systems to be just and fair. Getting a job should be based on merit, not gender, and getting a loan should be based on factors like your credit, not your skin colour.

The techno-utopian belief is that an algorithm can be more objective because it doesn't carry with it all of the human baggage of preconceptions or prejudices. After all, it's just code and data.

While people are often willing to put trust in mathematical models, believing they will remove human bias from the equation, thinking of algorithms as objective is a mistake, says mathematician Cathy O'Neil, author of the new book, Weapons of Math Destruction.

Algorithms, which she equates to "shiny new toys" that we can't resist playing with, replace human processes but aren't held to the same standards. They're often opaque, unregulated and uncontestable.

Facebook uses algorithms to serve up in its news feed what it thinks you'll be most interested in. But exactly how it does that is a closely guarded secret. (Dado Ruvic/Illustration/File Photo/Reuters)

According to Hirsh, our desire to believe computerized systems can offer a cure-all for human shortcomings "reflects the mythology of technology and our desire to give these systems power that they do not deserve."

In fact, research shows algorithms can actually accentuate the impact of prejudice.

Without contextual awareness or an understanding of existing social biases, algorithms see all inputted data as being equal and accurate. But as Hirsh points out, an algorithm will be biased when "the data that feeds the algorithm is biased."

So when a system learns based on an inherently biased model, it can, in turn, incorporate those hidden prejudices. For example, in a well-known study in which recruiters were given identical resumes to review, they selected more applicants with white-sounding names.

"If the algorithm learns what a 'good' hire looks like based on that kind of biased data, it will make biased hiring decisions," O'Neilwrotein an article for the Harvard Business Review, referencing the study.

Algorithms are informed by our own prejudices, beliefs and blind spots, all the way from the design of the algorithm itself to the data it is fed. Bad data in equals bad data out.

For example, as an article on FiveThirtyEight points out, black people are arrested more often than white people, even when they commit crimes at the same rates. But algorithms don't recognize that context; they often see all data as being equal.
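To make that concrete, here is a minimal sketch, not drawn from the article itself, of how this plays out in code. It uses Python with numpy and scikit-learn; the data is entirely synthetic and every number is invented. Past hiring decisions are generated with a built-in tilt toward one group, the protected attribute is withheld from the model, and the bias still comes through via a correlated proxy feature:

```python
# Hypothetical sketch (synthetic data, invented numbers): a model trained
# on biased historical decisions reproduces that bias, even when the
# protected attribute itself is withheld from the model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)                 # protected attribute: 0 or 1
skill = rng.normal(0, 1, n)                   # true merit, identical across groups
proxy = skill + 1.5 * group + rng.normal(0, 0.5, n)  # e.g. postal code or alma mater

# Historical labels: equally skilled candidates, but group 1 was favoured
hired = (skill + 0.8 * group + rng.normal(0, 0.5, n)) > 0.8

X = np.column_stack([skill, proxy])           # note: `group` is deliberately excluded
model = LogisticRegression().fit(X, hired)

predicted = model.predict(X)
for g in (0, 1):
    print(f"group {g}: predicted hire rate = {predicted[group == g].mean():.2f}")
# The model never saw `group`, yet the proxy feature smuggles the old bias in.
```

The model never "sees" the group label, yet it reproduces the historical disparity, which is precisely the trap O'Neil describes.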

Should augment, not override

Still, with the proper checks and balances, algorithms can be beneficial.

Knockri is a Canadian startup that helps companies automate their candidate-screening process by shortlisting applicants, based on internal core competencies, to identify the best fit for an in-person interview.

When algorithms are designed with recognition of the tendency for humans to exhibit bias in the hiring process, company co-founder and COO Maaz Rana says they can assist in talent acquisition "by consistently presenting an objective measure of someone, so it can be used as a reference to help make smarter and better hiring decisions."

Ultimately, he adds, the algorithm is there to supplement, not replace, human intelligence.

"We don't want an algorithm making the final hiring decision; we want it to help people make more informed and better hiring decisions."

In a well-known study, in which recruiters were given identical resumes to review, they selected more applicants with white-sounding names. So 'Greg' got significantly more callbacks than 'Jamal.' When algorithms are informed by our own prejudices, researchers have found bias can be built into the process itself. (Matt Rourke/Associated Press)

When it comes to mitigating the presence of bias in algorithms we rely on, Rana offers a few solutions. For starters, he says, create datasets that have been built from the ground up with a focus on inclusion, diversity and representation.

"Make sure to account for outliers. Scraping data from the internet is easy, but not always the best approach, since it's not difficult to have pre-existing biases creep their way into your algorithm it's important to be mindful."

Rana also suggests adding a manual quality-control process. "A human touch is essential for quality control, to identify whether any biases are being developed when introducing new data to the AI."
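One concrete form that quality-control step could take, sketched here as an assumption rather than as Knockri's actual process, is the "four-fifths rule" used in U.S. employment law as a rough screen for adverse impact: flag the system for human review whenever one group's selection rate falls below 80 per cent of another's.

```python
# Hypothetical sketch of a manual quality-control check: the "four-fifths
# rule" as a rough screen for adverse impact in hiring. The data and the
# 0.8 threshold here are illustrative, not taken from the article.

def selection_rate(decisions, groups, value):
    """Share of applicants in the given group with a positive decision."""
    picked = [d for d, g in zip(decisions, groups) if g == value]
    return sum(picked) / len(picked)

decisions = [1, 0, 1, 1, 0, 0, 1, 0]   # 1 = shortlisted by the algorithm
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]

ratio = selection_rate(decisions, groups, "b") / selection_rate(decisions, groups, "a")
print(f"adverse impact ratio: {ratio:.2f}")

if ratio < 0.8:                          # the four-fifths rule of thumb
    print("flag for human review: possible adverse impact")
```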

Ironically, one way to avoid algorithmic bias is to stop making decisions based solely on algorithms, and to bring in outside individuals to audit these processes, especially when making complex social decisions.

"Some things really ought to remain in the hands and minds of human beings," says Hirsh.

On one hand, it can seem as though we're going in circles: first developing algorithms to help mitigate human bias, then reintroducing humans into the process to keep the algorithms in check.

But in fact, it could be a sign that we're closer to seeing light at the end of the tunnel, underscoring the need for transparency and accountability as a means of countering biases in these new technological solutions, and in ourselves.