How Trust, Privacy Affect Human/AI Applications
Trust in artificial intelligence (AI) and how that translates to media management and consumer strategy is driving Sylvia Chan-Olmsted’s studies.
“Trust is an essential issue,” said Chan-Olmsted, a professor in the Department of Telecommunication and director of Media Consumer Research. “We are studying human/AI collaboration that is anchored in trust to see how various human/machine factors would affect trust, and how trust moderates the outcome.”
As part of her research, she’s examining consumer intention to use AI-enhanced applications or devices and the interaction process through the lens of trust. For example, she looks at factors like:
- What type of consumer? “Those who are much more positive are probably excited about innovation. Some are afraid of it and tend to be more pessimistic. Some have more of a propensity to trust.” Included in this are factors like confidence in one’s technological abilities.
- What type of machine? What is its competency level and machine “humanness”? “How people perceive the AI, including its intention, will also affect trust.”
She is conducting various national surveys on these topics. “We’re doing studies about smart devices at home like smart speakers,” she said, to try to understand how AI can enter people’s homes in a more trustworthy way.
From the consumer angle, Chan-Olmsted identified privacy as a big part of the artificial intelligence puzzle. “We need to know more about how people want to interact with AI and where privacy fits in when they try to balance between privacy and personalization.”
“It’s a trust issue. Everybody hates it when you do one search and there’s this algorithm that fuels your Facebook feed. That’s why segmentation is so important: Certain people don’t care; certain people get really turned off by that.”
Another area of Chan-Olmsted’s human/AI interaction research focuses on improving decision-making in media companies. “AI is here. At this moment, it’s more automation; it’s not as intelligent. But there will be more and more such applications. How can we use AI in a way that is most beneficial for media organizations and maximizes its intelligence? Again, trust will drive such outcomes.”
Researchers want to know what kinds of functions should be allocated to AI. “How should we make such decisions? Who should be making these decisions?” There are many issues of confidence, change and control when it comes to managing AI in the industry.
For example, Netflix’s data team and its Hollywood content team were at odds over data vs. human judgment when Netflix tried to decide what to focus its marketing on for the “Grace and Frankie” series, which stars Jane Fonda and Lily Tomlin. The data group argued that promotions without an image of Fonda performed better than those featuring both stars. But the company’s content executives worried about angering Fonda and possible contract violations. In the end, the content group prevailed.
Chan-Olmsted wants to learn more about using a collaborative approach to balance authority and enable mutual learning while building trust over time as AI is gradually adopted in the media industry. “We have to look at AI as a teammate that’s more than a tool.”