The Consumer Financial Protection Bureau (CFPB) raised concerns about banks' growing use of chatbots to handle routine customer service requests in a report released on Tuesday. The regulator worries that banks may cut human customer service staff and push more routine tasks to AI, and that poorly designed chatbots could violate federal laws governing how debts are collected or how personal information is used. Bank of America offers the largest and most successful financial chatbot, Erica, which now handles hundreds of millions of inquiries annually. More advanced AI-driven services are also on the way, such as JPMorgan Chase's plan to use ChatGPT and artificial intelligence to help customers choose appropriate investments. Bank of America bankers can use Erica to build customer profiles and potentially recommend products to those customers.
Although these AI chatbots can save customers time otherwise spent on hold waiting to reach a human with a routine question, the agency is concerned that they may be unable to handle the nuances of consumer protection laws without giving customers inaccurate information. The CFPB found that older or non-English-speaking customers may end up stuck in a customer service "loop," unable to reach a human representative. CFPB Director Rohit Chopra has repeatedly cautioned banks and other companies that poorly implemented algorithms or artificial intelligence software could put them in violation of consumer laws. There have also been reports of chatbots from Microsoft, Google, and other companies producing biased language when prompted.