AI Communication and Technology Trust
People Trust Human Journalists Over Algorithms
News organizations are starting to experiment with automated content. Research suggests readers perceive news written by algorithms as less credible than news written by a human, but that this bias can be reduced by exposure to robots in popular culture. Those are the findings of two studies examining how perceived authorship, human or machine, shapes readers’ judgments of an article’s credibility.
“Although machine-based automation is commonplace and accepted in fields such as automobile production or clothes manufacturing, it remains relatively novel and unexpected in a domain such as news production,” wrote Frank Waddell, assistant professor in the Department of Journalism, in a paper published in Digital Journalism.
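To make the idea concrete, here is a minimal, hypothetical sketch of template-based story generation, the general approach popularized by services such as Automated Insights; their actual platform is proprietary and far more sophisticated, and all names and figures below are invented:

```python
# Hypothetical sketch: filling a fixed narrative template with structured
# poll data. Field names and percentages are invented for illustration.

POLL = {
    "state": "Virginia",
    "dem_leader": "Hillary Clinton",
    "dem_pct": 52,
    "gop_leader": "Donald Trump",
    "gop_pct": 41,
}

TEMPLATE = (
    "{dem_leader} and {gop_leader} lead their parties in the {state} "
    "presidential primary, polling at {dem_pct}% and {gop_pct}% among "
    "their respective voters."
)

def generate_story(poll: dict) -> str:
    """Produce a news sentence by slotting poll figures into the template."""
    return TEMPLATE.format(**poll)

print(generate_story(POLL))
```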
For the first study, Waddell recruited 129 participants from a crowdsourcing website. They read a four-paragraph news article, current at the time of the study in 2016, reporting the results of a poll that showed Hillary Clinton and Donald Trump as the frontrunners for their respective parties in the Virginia presidential primary. Participants were randomly assigned to read the article under one of two bylines: “Kelly Richards, Reporter” or “Automated Insights, Robot Reporter.” The articles were otherwise identical. After reading the article, participants answered a series of questions designed to check whether they recalled the byline and to evaluate whether they found the article accurate, authentic, believable, high-quality, newsworthy and representative.
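As an illustration of the design, the sketch below randomly assigns a participant to one of the two bylines and averages the six rating items into a single credibility score; the 1–7 scale and variable names are assumptions, not details taken from the paper:

```python
import random
from statistics import mean

BYLINES = ["Kelly Richards, Reporter", "Automated Insights, Robot Reporter"]
ITEMS = ["accurate", "authentic", "believable",
         "high-quality", "newsworthy", "representative"]

def assign_condition() -> str:
    """Randomly assign a participant to one of the two byline conditions."""
    return random.choice(BYLINES)

def credibility_index(ratings: dict) -> float:
    """Average the six items into one credibility score (assumed 1-7 scale)."""
    return mean(ratings[item] for item in ITEMS)

# Simulate one participant's session.
participant = {
    "byline": assign_condition(),
    "ratings": {item: random.randint(1, 7) for item in ITEMS},
}
print(participant["byline"], credibility_index(participant["ratings"]))
```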
The results showed that readers rated news attributed to machines as less credible, less newsworthy and of lower quality than news attributed to a human journalist. This could be due to the high expectations that readers hold toward new forms of technology, leading to what researchers describe as “expectancy violations,” where one’s prior expectations for a new technology are not fulfilled by its actual abilities or functions.
“The expectation that automated news is ‘objective’ or ‘error-free’ is a high standard for automation to obtain, thus leading to situations likely to foster negative expectancy violations,” he wrote.
Why are expectations of automated journalism so high? One possible explanation is that seeing an article purportedly authored by a machine activates a rule of thumb: if the article is written by a machine, it must be free of bias. This mental shortcut could heighten readers’ expectations of the news and trigger the expectancy violation.
Another source of this bias against machines could be low anthropomorphism, or the tendency to ascribe human qualities to non-humans. “Entities that are perceived as less human-like are less likely to be perceived as capable of completing human tasks,” Waddell wrote.
Finally, Waddell wanted to see how participants’ prior exposure to emerging technology might affect their views on the credibility of robot reporters. A large-scale 2016 survey by Waddell and his colleagues had shown that adults who remembered a robot from a film were less anxious about robotics and more likely to express interest in using it, so the second study tested whether such exposure influenced readers’ evaluations.
In the second study, 182 people completed a questionnaire that, among other things, asked about their knowledge of and exposure to robots and other forms of artificial intelligence. They then read an article carrying the byline of either a human journalist or a robot journalist, and afterward answered questions about whether the source met their prior expectations, how credible they found the article, and how human-like the source seemed.
The second study found that the bias against machine reporters operates through source anthropomorphism: readers perceive robot reporters as less human-like, and therefore as less capable of doing a journalist’s job, which in turn lowers the credibility they assign to the article.
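The statistical logic behind such a mediation claim can be sketched with a simple product-of-coefficients model; the data below are simulated and the coefficients invented, so this illustrates the reasoning rather than reproducing the paper’s analysis:

```python
import numpy as np

# Simulated mediation: byline (X) -> anthropomorphism (M) -> credibility (Y).
rng = np.random.default_rng(0)
n = 182
x = rng.integers(0, 2, n)                # 0 = human byline, 1 = robot byline
m = 5.0 - 2.0 * x + rng.normal(0, 1, n)  # robot byline rated less human-like
y = 2.0 + 0.8 * m + rng.normal(0, 1, n)  # credibility tracks humanness

# Path a: effect of byline on anthropomorphism.
a = np.linalg.lstsq(np.c_[np.ones(n), x], m, rcond=None)[0][1]

# Path b: effect of anthropomorphism on credibility, controlling for byline.
b = np.linalg.lstsq(np.c_[np.ones(n), x, m], y, rcond=None)[0][2]

# A negative indirect effect means the robot byline lowers perceived
# humanness, which in turn lowers perceived credibility.
print(f"indirect (mediated) effect a*b = {a * b:.2f}")
```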
“It appears that news writing is still largely perceived as ‘a human’s job,’” Waddell wrote. “News readers appear to prefer journalists who are similar to the self, even when the level of identification is merely an affinity based on possessing human-like (rather than machine-like) appearance.”
The second study also found that exposure to robots in popular culture, such as movies or videos, led readers to evaluate robot reporters more positively than their counterparts who weren’t familiar with robots. “Popular media is therefore likely to play a prominent role in shaping how robotics are evaluated in other domains,” such as news reporting, Waddell wrote.
In the meantime, news outlets that use machines to gather data for stories could give readers information about how the story was produced, which may help them acclimate to the idea of automated journalism. Crediting human journalists and automated sources together may also lessen negative sentiment toward machine-generated news content, which opens up another potential area of future study, Waddell said.
The study focused on news stories attributed to a machine or a human journalist, so it provides important but limited information: human journalists still have a role in automated news, both in creating the algorithms that produce stories and in editing the automated content. The surveys also focused on source attribution; studying stories actually generated by a human or a machine could yield different results.
“Future research could build upon the results offered by the present investigations through examining the effects of news attributed both to human and robotic agents, while also testing the interaction between news attributed to machines with news actually generated in part by automation,” he wrote.
Posted: June 26, 2018
Tagged as: AIatUF, automated content, Department of Journalism, Frank Waddell, Journalism, journalism research, UF Research