Trust me, I’m a chatbot: Companies using them in customer services + Data privacy

#1
C C
Data privacy -- are you sure you want a cookie?
https://www.eurekalert.org/pub_releases/...071521.php

INTRO: Data privacy is an important topic in the digitalised economy. Recent policy changes have aimed to strengthen users' control over their own data. Yet new research from Copenhagen Business School finds that designers of cookie banners can steer users' privacy choices by manipulating the choice architecture, and that simple changes can increase absolute consent by 17%... (MORE)


Trust me, I’m a chatbot: Companies using them in customer services
https://www.uni-goettingen.de/en/3240.html?id=6341

RELEASE: More and more companies are using chatbots in customer services. Due to advances in artificial intelligence and natural language processing, chatbots are often indistinguishable from humans when it comes to communication. But should companies let their customers know that they are communicating with machines and not with humans?

Researchers at the University of Göttingen investigated. Their research found that consumers tend to react negatively when they learn that the person they are talking to is, in fact, a chatbot. However, if the chatbot makes mistakes and cannot solve a customer’s problem, the disclosure triggers a positive reaction. The results of the study were published in the Journal of Service Management.

Previous studies have shown that consumers have a negative reaction when they learn that they are communicating with chatbots – it seems that consumers are inherently averse to the technology. In two experimental studies, the Göttingen University team investigated whether this is always the case. Each study had 200 participants, each of whom was put into the scenario where they had to contact their energy provider via online chat to update their address on their electricity contract following a move. In the chat, they encountered a chatbot – but only half of them were informed that they were chatting online with a non-human contact.

The first study investigated the impact of making this disclosure depending on how important the customer perceives the resolution of their service query to be. In a second study, the team investigated the impact of making this disclosure depending on whether the chatbot was able to resolve the customer's query or not. To investigate the effects, the team used statistical analyses such as covariance and mediation analysis.

The result: most noticeably, if service issues are perceived as particularly important or critical, there is a negative reaction when it is revealed that the conversation partner is a chatbot. This scenario weakens customer trust.

Interestingly, however, the results also show that disclosing that the contact was a chatbot leads to positive customer reactions in cases where the chatbot cannot resolve the customer's issue. "If their issue isn't resolved, disclosing that they were talking with a chatbot makes it easier for the consumer to understand the root cause of the error," says first author Nika Mozafari from the University of Göttingen. "A chatbot is more likely to be forgiven for making a mistake than a human." In this scenario, customer loyalty can even improve.

