Rise of the socialbots: They could be influencing you online

This past weekend, Canadians tweeted almost 18,000 election-related messages in 48 hours, according to analysis by Ottawa-based digital public affairs strategist Mark Blevis. But (and this isn't as big a "but" as you might think) what if some of those messages weren't written by human beings?

A new breed of computer programs called socialbots are now online, and they could be used to influence online political discourse.

What's a socialbot? Basically, it's a piece of automated software that controls a social media account.

Now, automated social networking accounts are nothing new. For instance, CBC has several Twitter accounts that do nothing but automatically post headlines with links to news stories. Or I can sign up for weather updates on Twitter that are published by a robot. And once, I interviewed the creator of a robotic toilet that automatically tweeted with every flush.
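To make the distinction concrete, here's a minimal sketch of that benign kind of automation: a script that watches a news feed and posts each new headline with a link. The feed URL and the post_update() helper are placeholders I've invented for illustration; a real account would call the social network's posting API instead of printing.

```python
# A minimal sketch of the benign kind of automation described above: a script
# that watches a news feed and posts every new headline with a link.
# FEED_URL and post_update() are placeholders invented for illustration;
# a real account would call the social network's posting API instead of printing.
import time
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://example.com/news/rss"  # hypothetical RSS feed


def fetch_headlines(url):
    """Return (title, link) pairs from a simple RSS feed."""
    with urllib.request.urlopen(url) as resp:
        root = ET.fromstring(resp.read())
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]


def post_update(text):
    """Stand-in for a real 'post a status update' API call."""
    print("POSTED:", text)


if __name__ == "__main__":
    seen = set()
    while True:
        for title, link in fetch_headlines(FEED_URL):
            if link not in seen:
                seen.add(link)
                post_update(f"{title} {link}")
        time.sleep(300)  # check the feed again every five minutes
```

There's no deception here: the account does one obvious, mechanical thing and doesn't pretend otherwise.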

But socialbots are different. Socialbots hide the fact that they're robots. Many are specifically programmed to infiltrate online communities and pass themselves off as human beings. And they're out there in the wild, right now.

If you're like me (a human being with a Twitter account), you can post messages. You can reply to messages. You can re-post, or retweet others' messages. You can follow other users.

Socialbots mimic all of these actions in an effort to blend in; in other words, to appear human. They reply to tweets. They retweet popular messages. Some of them even appropriate others' tweets. There's an old New Yorker cartoon: "On the Internet, nobody knows you're a dog." Socialbots are a bit like that, but in this case, on Twitter, nobody knows you're a computer program.
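As a rough sketch of how little code that mimicry takes, here's what a socialbot's main loop might look like. The client object and every method on it are hypothetical stand-ins for a social network API, not any real library; the point is the decision loop: reply, retweet, follow, wait, repeat.

```python
# A rough sketch of a socialbot's main loop: reply, retweet, follow, wait, repeat.
# The `client` object and every method on it are hypothetical stand-ins for a
# social network API, not any real library.
import random
import time

CANNED_REPLIES = ["Totally agree!", "Ha, good point.", "Thanks for sharing this."]


def run_socialbot(client, community_hashtag):
    while True:
        # Reply to anyone who mentioned the bot, so it seems responsive.
        for mention in client.get_mentions():
            client.reply(mention, random.choice(CANNED_REPLIES))

        # Retweet the most popular recent message on the community's topic, to blend in.
        posts = client.search(community_hashtag)
        if posts:
            client.retweet(max(posts, key=lambda p: p.retweet_count))

        # Follow a couple of accounts in the community and hope they follow back.
        users = client.search_users(community_hashtag)
        for user in random.sample(users, k=min(2, len(users))):
            client.follow(user)

        # Sleep an irregular interval so the activity doesn't look machine-regular.
        time.sleep(random.randint(600, 3600))
```

Even the irregular sleep at the end is part of the disguise; perfectly regular, around-the-clock posting is one of the easier tells that an account is automated.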

For example, back in 2008, computer engineering students Zack Coburn and Greg Marra created a socialbot called @trackgirl, designed to infiltrate a group of running enthusiasts. When @trackgirl started following people on Twitter, they followed her back. As Marra wrote on his blog, @trackgirl started tweeting about her marathon training, and "she wove her way into the community. One day trackgirl tweeted that she had fallen and hurt her knee. Her followers immediately replied with concern, asking if she was ok." People had developed some level of emotional connection to @trackgirl without knowing she was a robot.

That's what I find so fascinating and disturbing about social robots online. Imagine hundreds or thousands of autonomous software personas, each programmed to infiltrate and influence popular political opinion.

For his perspective on this, I called Tim Hwang. He's the co-director of the Web Ecology Project, a research community that recently held a Socialbots coding competition.

Tim told me that socialbots can be programmed for a variety of desired outcomes. For instance, he's seen bots designed to create new connections between users. "One of these bots is very prone to introducing people to one another. And we actually find that these bots have a really powerful influence on getting those people to talk to one another."
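Hwang didn't publish the code behind those bots, but the introducer behaviour he describes is simple to imagine. Here's a hedged sketch, again with a hypothetical client object standing in for a real API: the bot looks for pairs of people discussing the same topic who don't yet follow each other, and @-mentions them together in a single post.

```python
# An illustrative sketch (not the Web Ecology Project's code) of an "introducer"
# bot: find people tweeting about the same topic who don't follow each other,
# and mention them together so they start talking. `client` and its methods
# are hypothetical placeholders.
import itertools


def introduce_pairs(client, topic, limit=5):
    # Assume client.search() returns posts whose .author is a screen-name string.
    authors = {post.author for post in client.search(topic)}
    introduced = 0
    for a, b in itertools.combinations(authors, 2):
        if introduced >= limit:
            break
        if not client.follows(a, b) and not client.follows(b, a):
            client.post(f"@{a} @{b} you're both talking about {topic}, you should compare notes!")
            introduced += 1
```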

As Tim explained all of this, the word "coalition" kept coming to mind.

Socialbots could also be programmed for the opposite effect: to disconnect and disrupt existing groups. This software, and the popularity of online social networks, "opens up the possibility for these bots to have an aggregate impact on the way that real humans connect online," he said.

This also has huge potential implications for election news coverage. When sites like Twitter and Facebook are so often used as a barometer for public opinion, what does it mean when some of the participants are silicon?

As far as I know, no Canadian politicians or parties are using socialbots. From a technical perspective, the software required to launch a socialbot campaign is available, open source, and free.

From a legal perspective, Canada's Elections Act (not surprisingly) makes no explicit reference to the use of automated online personas. However, the act does have rules about reporting election spending, and rules about collusion that could prevent politicians from employing such a strategy.

If a politician were to try to influence public opinion through software and it came to light, I can't imagine it'd be anything but scandalous. For example, during Toronto's last mayoral election, a member of Rob Ford's team allegedly used a fictional Twitter account to mislead a voter into handing over incriminating material. Many were critical of that strategy.

Public affairs strategist Mark Blevis calls the prospect of socialbot armies "extraordinarily creepy. I would say it's unethical."

Though I suppose that creepiness would only be apparent if a socialbot failed and was exposed as such.

Public socialbot projects have been small so far, but they're scaling up quickly. Researcher Tim Hwang says he's working on a "large-scale social architecture project" that will involve 10,000 Twitter users over the next three to six months. "The idea is to create a bot-constructed social bridge. So basically, these two groups of 5,000 users will become more and more connected without being aware that this aggregated effect is happening."

So then, how do you tell the difference between a human Twitter user and a socialbot?

This can be tricky, as socialbots are designed to mimic human behaviour. And if you ask a bot whether it is indeed a bot, you're not likely to get a truthful response.

Hwang's advice? Try to hold an extended conversation with the suspected bot. "One of the things that the bots are able to leverage on Twitter is that the interactions are often very short. So it's easy for them to get by that way. But if you have an extended conversation with a bot, you may be able to tell that it's not responding in a human-like way or an intelligent way."
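Hwang's test is conversational, but the same intuition can be roughed out in code. The heuristic below is my own illustration, not anything from Hwang or the Web Ecology Project: it flags an account whose replies are uniformly short and heavily repeated, and it assumes you've already collected a list of the suspect account's reply texts.

```python
# An illustrative heuristic (not Hwang's method): flag an account whose replies
# are uniformly short and heavily repeated, one machine-checkable hint that
# you may be talking to a script rather than a person.
from collections import Counter


def looks_bot_like(replies, max_avg_words=6, max_unique_ratio=0.5):
    """`replies` is a list of the account's reply texts, gathered however you like."""
    if not replies:
        return False
    avg_words = sum(len(r.split()) for r in replies) / len(replies)
    unique_ratio = len(Counter(replies)) / len(replies)
    return avg_words <= max_avg_words and unique_ratio <= max_unique_ratio


# Four short replies, only two of them distinct: flagged as bot-like.
print(looks_bot_like(["Totally agree!", "Totally agree!",
                      "Totally agree!", "Ha, good point."]))  # True
```

A crude filter like this would miss a well-written bot and falsely flag some terse humans, which is exactly why Hwang falls back on conversation as the real test.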

[REDACTED: Pithy closing joke about holding an extended, intelligent, human conversation with anyone on Twitter.]