From automation and decision-making to fraud detection and customer experience, the applications of artificial intelligence in financial services seem endless. As companies, both large and small, navigate this evolving landscape and its plethora of vendors and solutions, many ask themselves: How will this technology be regulated once our legislative branch starts looking into it more seriously?
At Skit.ai, we organized a panel discussion hosted by our friends at Accounts Recovery with a group of renowned experts, whom we asked the most pressing questions about AI in financial services and the regulatory environment. What regulations should we expect? More specifically, which aspects of AI will regulators be most interested in scrutinizing?
In this article, we’ll discuss the current role of AI in the financial sector, with particular attention to the accounts receivable industry, and report some of the insights from the industry experts we interviewed during the event.
Understanding AI’s Impact on Financial Services
AI in financial services is not a prediction or a catchphrase. According to an international survey published in 2020 by the World Economic Forum and the Cambridge Centre for Alternative Finance, 85% of financial services providers already use AI in some form. Additionally, 77% of the responding institutions said they believed AI would become essential to their business within the following two years. With the launch of ChatGPT in 2022, these numbers are likely even higher today.
Some of the most notable applications of AI in the sector, according to Deloitte, are:
Conversational AI (such as chatbots and voicebots) for consumer interactions
Fraud detection and prevention
Customer relationship management
Predictive analytics
Credit risk management
The Regulatory Framework in the United States
Over the last few years, legislators have made efforts to study and regulate the use of AI in various industries, including financial services. But while legislative bodies abroad have been notably faster at passing timely legislation, there has yet to be a successful attempt at the federal level here in the United States.
In 2022, a bipartisan privacy bill, the American Data Privacy and Protection Act (ADPPA), was introduced in Congress, but it did not make it through the Senate and has since been abandoned. Later in 2022, the White House published a policy document named the “Blueprint for an AI Bill of Rights,” seeking to provide guidance on the rights that lawmakers should keep in mind when framing the discussion on the regulation of AI across industries.
In September, the U.S. Senate Committee on Banking, Housing, and Urban Affairs held a hearing about “Artificial Intelligence in Financial Services” to discuss AI’s applications, risks, and benefits in the industry.
The witnesses who spoke at the hearing were Melissa Koide of FinRegLab, who spoke about credit underwriting; Professor Michael Wellman of the University of Michigan, who raised concerns about algorithmic trading and market manipulation; and Daniel Gorfine of Gattaca Horizons, who focused on the opportunities presented by AI.
Most recently, the White House issued an executive order on artificial intelligence, establishing guidelines for AI safety and security. The order includes requirements that aim to protect consumers from threats to privacy, discrimination, and fraud.
Insights from the Experts: Possible U.S. Regulations of AI
The following quotes are excerpts from the webinar hosted by Accounts Recovery. Watch the recording to listen to the entire conversation and get the full context. The four experts who spoke are Dara Tarkowski of Actuate Law, Heath Morgan of Martin Golden Lyons Watts Morgan, Vaishali Rao of Hinshaw & Culbertson, and Prateek Gupta of Skit.ai.
(Please note: The information provided in this article does not, and is not intended to, constitute legal advice; instead, all information is for general informational purposes only.)
Key Takeaway 1: Look at the European Union for Guidance
“The United States is pitifully far behind the EU, the UK, areas of APAC, and Australia in the way they’ve approached the technology and the utilization of the technology. If we want to see which direction our country will go in terms of AI regulations, we have a five-year playbook of what it looks like in the rest of the world.”
“What we’ve seen from the hearings that have been held in Congress is that, at its base, the concern of lawmakers, regulators, and a lot of the practitioners is that bad data leads to bad outcomes, which is selection bias. Then we’ve got process bias, which means that bad methods and bad processes lead to bad outcomes. Philosophically, those are the two issues that lawmakers are trying to address in whatever sector.”
“If you’re looking for guidance, put together a framework that is largely compliant with what the European Union has already laid out as the ethical and safe use of AI. In a global economy, it would be foolish of the United States to deviate too much from what the rest of the world is already adopting.”
Key Takeaway 2: This Is Not About Replacing People with Technology
“In our industry, the usage of these types of technologies is not and should not be to replace people or to replace the thoughtfulness and the consideration of the decisioning. However, a lot of these technologies can help speed up and improve our decisioning, so that people can make better and faster decisions, which is better for both businesses and consumers.”
Key Takeaway 3: AI Must Provide Value to Consumers
When it comes to the use of chatbots and voicebots, “you can’t keep consumers in an infinite loop with the artificial intelligence system and not let them talk to an actual human being whenever the AI is unable to provide a resolution. One of the focuses needs to be making sure that AI provides value to the consumer and is not used as a way for companies to create a hurdle between consumers and live agents.”
Key Takeaway 4: Waiting for Regulations May Not Be the Best Strategy
Should we wait for regulations before adopting AI solutions to avoid any risks? “You can’t bury your head in the sand and say: ‘We’re not going to deploy this technology until there are regulations.’ It really isn’t a question of whether you are going to adopt this technology—it’s a matter of when. The more you accept that and look into having risk assessments, an AI policy, and an AI committee, the better you’re going to be. The technology is coming to you through vendors and consumers before you know it.”
Key Takeaway 5: Set up an AI Task Force
“Set up an AI task force, so you can set up a framework on how to use AI properly.”
Want to learn more about Conversational Voice AI and how it can benefit your business? Use the chat tool below to schedule a free consultation with one of our experts!
State-level Regulations Are Just as Important as the Federal Ones
Virtually everyone working in the accounts receivable industry is familiar with Reg F, the CFPB rule that took effect in 2021 to implement and update the Fair Debt Collection Practices Act (FDCPA). Reg F provides parameters for call frequency in debt collections; in particular, the 7x7x7 rule, which allows a maximum of 7 call attempts within a 7-day period and permits the collector to call again only 7 days after having had a telephone conversation with the consumer.
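To make the rule concrete, here is a minimal sketch, in Python, of how a dialer might gate an outbound attempt under the 7x7x7 rule; the function and field names are illustrative assumptions, not Skit.ai’s production logic.

from datetime import datetime, timedelta
from typing import List, Optional

MAX_ATTEMPTS = 7               # no more than 7 call attempts...
WINDOW = timedelta(days=7)     # ...within any 7-day period
COOL_OFF = timedelta(days=7)   # wait 7 days after a telephone conversation

def may_call(now: datetime, attempts: List[datetime],
             last_conversation: Optional[datetime]) -> bool:
    """Return True if another attempt is allowed under the 7x7x7 rule."""
    recent = [t for t in attempts if now - t <= WINDOW]
    if len(recent) >= MAX_ATTEMPTS:
        return False
    if last_conversation is not None and now - last_conversation < COOL_OFF:
        return False
    return True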
However, some states have stricter laws when it comes to the debt collection industry and call frequency.
When training new agents or deploying a new software solution for your collection strategy, it’s important not to forget these state-level regulations, which are just as important as the federal ones.
Examples of State-specific Regulations for Collection Calls
Here are three examples of state-level regulations that restrict call frequency further than Reg F does.
Massachusetts: According to the Attorney General’s regulations, creditors and collection agencies are allowed to make a maximum of 2 communication attempts via telephone (calls or texts) in any period of 7 consecutive days.
New York: New York’s rule is similar to Massachusetts’: collectors may not make more than 2 communication attempts (calls, texts, letters, emails, etc.) in a 7-day period.
North Carolina: Collection agencies are allowed to make only 1 communication attempt to a particular third party in any period of 7 consecutive days to obtain location information.
How Skit.ai’s Compliance Filters Tackle State Regulations
Working with legal and compliance experts, we at Skit.ai have compiled the various state-level regulations and integrated them into our Voice AI solution’s compliance filters.
Our solution identifies the consumer’s state from the zip code of their most recent address and applies the relevant regulations in real time during campaign initiation. This way, Skit.ai’s solution never dials out a non-compliant call to a consumer.
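As an illustration of how such a filter can work, here is a minimal sketch in Python; the zip-to-state lookup and the per-state limits table below are hypothetical placeholders (the limits mirror the Massachusetts, New York, and North Carolina examples above), not Skit.ai’s actual implementation.

from datetime import datetime, timedelta
from typing import Dict, List

# Illustrative per-state attempt limits drawn from the examples above;
# a production system would maintain the full, current rule set.
STATE_LIMITS: Dict[str, Dict[str, int]] = {
    "MA": {"max_attempts": 2, "window_days": 7},   # calls or texts
    "NY": {"max_attempts": 2, "window_days": 7},   # any communication attempt
    "NC": {"max_attempts": 1, "window_days": 7},   # third-party location calls
}
REG_F_DEFAULT = {"max_attempts": 7, "window_days": 7}  # federal baseline

def state_from_zip(zip_code: str) -> str:
    """Hypothetical lookup of the consumer's state from the most recent address."""
    zip_to_state = {"02108": "MA", "10001": "NY", "27601": "NC"}  # stand-in data
    return zip_to_state.get(zip_code, "")

def is_call_compliant(zip_code: str, attempts: List[datetime], now: datetime) -> bool:
    """Apply the state rule if one exists, otherwise the Reg F baseline."""
    rule = STATE_LIMITS.get(state_from_zip(zip_code), REG_F_DEFAULT)
    window = timedelta(days=rule["window_days"])
    recent = [t for t in attempts if now - t <= window]
    return len(recent) < rule["max_attempts"]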
Want to learn more about how Conversational Voice AI can help you streamline your collection strategy and comply with all regulations? Schedule a call with one of our experts using the chat tool below.
Artificial Intelligence (AI) has disrupted almost every industry in one way or another, and the ARM industry is no exception. However, due to stringent regulatory restrictions in the industry, leaders have been cautious about implementing it.
This paper provides a compliance review of voice technologies, especially AI-powered virtual/Digital Voice Agents. The intention is to briefly introduce the technology and then lay out the statutory framework used to analyze it for compliance.
I have attempted to cite the relevant cases to establish my point of view from a legal perspective. Also included in the paper is a list of things one should consider before implementing such a solution.
It is imperative that ARM leaders adopt such technologies, albeit cautiously, in order to stay in business in this era of labor arbitrage, inflation, and a generational shift in communication preferences.
This whitepaper is written in collaboration with Skit.ai. Skit.ai is an Augmented Voice Intelligence Platform that helps businesses modernize their contact centers and customer experience by automating and improving voice communications at scale. Skit.ai is a vertical voice AI company, which means it brings deep domain expertise and business knowledge along with advanced technical know-how.
Skit.ai is the winner of the CCW Excellence Award for Disruptive Technology of the Year 2022. Skit.ai was also named a Cool Vendor in the 2021 Gartner report “Cool Vendors in Conversational and NLT: Widen Use Cases, Domain Knowledge and Dialect Support.”
Disclaimer: The information in this whitepaper is not intended to be legal advice and may not be used as legal advice. Legal advice must be tailored to the specific circumstances of each case. Every effort has been made to assure that this information is up to date as of the date of publication. It is not intended to be a full and exhaustive explanation of the law in any area, nor should it be used to replace the advice of your own legal counsel.
Introduction of Technology and Background
As a 20-year veteran of the ARM industry, I have firsthand experience as a debt collection agency shareholder and general counsel; I am the named inventor of three patented or patent-pending products that are or will be available in the ARM industry; and I currently provide legal representation to various debt collection agencies in defense litigation, regulatory investigations, and compliance preparedness efforts. In these capacities, I continually find myself evaluating new and emerging technologies. The evaluation of these emerging technologies is often premised on a regulatory or statutory challenge that is impeding the needs of the industry.
In this whitepaper, I will address an emerging technology that is fairly new to the ARM industry: Digital Voice Agent (DVA) technology. DVA technology is not an entirely new concept for ARM, as we have utilized Interactive Voice Response (IVR) technology for several years. Both DVA and IVR technologies have regulatory challenges under the Telephone Consumer Protection Act (TCPA), specifically the restrictions on prerecorded messages and artificial voice calls that are placed to cellular telephones. The TCPA’s statutory construction is predicated on extremely dated technology: at the time the statute was drafted, the focus was primarily on landline telephones, pagers, and per-minute charges for long-distance calls or per-minute cellular telephone plans. While the statutory language is predicated on outdated technology, that in and of itself does not alleviate the requirement for compliance. Instead, it creates the need for workarounds to ensure compliance with the statute.
“Both DVA and IVR technologies have regulatory challenges under the Telephone Consumer Protection Act (TCPA), specifically the restrictions on prerecorded messages and artificial voice calls that are placed to cellular telephones.”
Statutory Framework
47 U.S.C. § 227(b)(1)(A) states, in relevant part, as follows: (1) PROHIBITIONS. It shall be unlawful for any person within the United States, or any person outside the United States if the recipient is within the United States – (A) to make any call (other than a call made for emergency purposes or made with the prior express consent of the called party) using any automatic telephone dialing system or an artificial or prerecorded voice – (iii) to any telephone number assigned to a paging service, cellular telephone service, specialized mobile radio service, or other radio common carrier service, or any service for which the called party is charged for the call.
The purpose of the TCPA at its inception was to avoid tying up emergency lines or saturating new and growing wireless networks. At that time, there were only about seven million cellular subscribers, and cellular telephone plans were expensive, costing around $1 per minute of use. In seeking to curb “robocalls” and the extreme costs associated with those calls to cellular telephones, Congress was attempting to regulate certain broadly placed calls that made use of a specific type of equipment prevalent in the 1990s, with no connection or relationship between the calling party and the recipient. Recipients, if they answered, were charged significant fees just for answering the calls. Fast forward several years, and the Courts interpreted these rules very broadly to include text messaging. The Courts’ interpretation brought into question communications, both calls and text messages, for prescription notifications, security alerts, collection calls, appointment reminders, and the like. The Courts allowed the TCPA guardrails to stretch to businesses with existing business relationships, which created class action exposure in the tens to hundreds of millions of dollars. In many situations, consumers not only desired these calls but were negatively impacted by the restrictions, as they went uninformed of appointments or past-due obligations.
Fast forward again to 2021, when the Supreme Court of the United States (SCOTUS), via Facebook, Inc. v. Duguid, 592 U.S. ___, 141 S. Ct. 1163, 209 L.Ed.2d 272 (2021), resolved a split amongst the circuit courts and rejected the expansive definition of an ATDS provided primarily by the U.S. Court of Appeals for the Ninth Circuit. The Ninth Circuit rulings essentially considered every cellular telephone in America to have the capacity to become an ATDS. While there is still potential liability regarding calls without prior express consent or without human intervention, the courts have seen fewer cases than prior to the Facebook decision. What Facebook did not assist with is the use of prerecorded messages or calls that utilize an artificial voice. Instead, SCOTUS utilized a textualist review of the statute to determine that, in order to qualify as an ATDS under the TCPA, a dialing system must have the “capacity to randomly or sequentially store or dial phone numbers.” This completely changed the direction of several courts, which had been taking individual approaches to what constituted “capacity” and often focused specifically on the term “store” to create an expansive definition of an ATDS.
The Federal Communications Commission (FCC) is the agency empowered to issue rules and regulations under the TCPA. The TCPA prohibits calls to a cellular telephone using an “automatic telephone dialing system” or an “artificial or prerecorded voice” without first obtaining express consent from the called party. In the early 1990s, when the TCPA was enacted, these restrictions were a response to a rise in consumer complaints regarding telemarketing calls placed by systems that utilized sequential number generation for outbound telephone calls playing pre-recorded messages.
Congress enacted the legislation and empowered the FCC to enforce it. Through a handful of declaratory rulings in 2003, 2008, and 2015, the FCC expanded the definition of an ATDS under the TCPA to include all predictive dialers and essentially anything that had the capacity to predictively dial or dial from a list of stored telephone numbers.
In ACA International, Inc. v. FCC, 885 F.3d 687 (D.C. Cir. 2018), the D.C. Circuit reviewed challenges to the FCC’s declaratory orders, namely:
Its definition of an ATDS;
The reasonableness of the one-call “safe harbor” for calls placed to reassigned numbers;
Revocation of consent; and
The FCC’s exemption for certain healthcare-related calls.
The D.C. Circuit set aside the FCC’s interpretation of an ATDS and the FCC’s treatment of reassigned numbers as a whole, and it upheld the FCC’s 2015 ruling that a called party may revoke consent at any time and through any reasonable means, as well as the narrow exemption for certain healthcare-related calls. Thankfully, the Facebook decision years later provided additional clarity. However, that clarity has since been applied by federal district courts throughout the country with mixed results.
Courts have continued to reject what has been commonly referred to as the “Footnote 7” argument (referring to Footnote 7 in the Facebook decision, where SCOTUS suggested in dicta that randomly or sequentially selecting numbers from a predetermined list might qualify as an ATDS), focusing instead on the generation of the numbers rather than the selection of a number. See generally, Laccinole v. Navient Solutions, LLC, 2022 WL 656167 (D. R.I. Mar. 4, 2022); Montanez v. Future Vision Brain Bank, LLC, 2021 WL 1697928 (D. Colo. Apr. 29, 2021); McEwen v. Nat’l Rifle Ass’n of Am., 2021 WL 5999274 (D. Me. Apr. 14, 2021); Carl v. First National Bank of Omaha, 2021 WL 2444162 (D. Me. June 15, 2021); Barton v. Temescal Wellness, LLC, 2021 WL 2143553 (D. Mass. May 26, 2021).
So, where are we today? We are still in a world where we need to consider capacity, the storage of phone numbers, and other variables in determining how the equipment used may or may not be defined as an ATDS under the TCPA.
Interactive Voice Response (IVR)
IVR is an automated phone system technology that allows incoming callers to access information via a voice response system of prerecorded voice prompts and touch-tones without having to speak to an agent. It also provides menu options, via touch-tone keypad selection or speech recognition, to have a call routed to specific departments or specialists. IVRs communicate with consumers through speech synthesis based on a series of voice prompts that are pre-programmed by the calling party. IVR technology provides an efficient means for consumers to self-service an account, make a payment, identify that they are the correct or incorrect party called, request information, or identify information about their account in order to be routed to the proper call center agent.
IVR technology is useful when built to provide pre-determined, optional responses by consumers, like Press 1 to reach the receptionist, Press 2 to reach a manager, Press 3 to be removed from further call attempts from this company. It allows an outbound call to provide a series of options for a consumer to select from to route a call, gives consumers the ability to respond to a predetermined question, and allows companies to provide pre-recorded disclosures required by federal, state, or local laws, rules, or ordinances. IVR systems have provided increased customer satisfaction ratings and have improved contact center operations and key performance indicators. During peak times of high call volumes, an effective IVR system can reduce consumer wait times by utilizing self-service tasks to answer routine questions. IVR technology is also available any time of the day or night; depending on the consumer’s preferences and work schedule, it makes information readily available at times when a live agent otherwise would not be.
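For illustration only, a touch-tone menu of the kind described above can be represented as a simple mapping from keypad input to a routing action; the menu entries and function names below are hypothetical.

# Hypothetical DTMF menu: keypress -> routing action
IVR_MENU = {
    "1": "route_to_receptionist",
    "2": "route_to_manager",
    "3": "add_to_do_not_call_list",
}

def handle_keypress(digit: str) -> str:
    """Return the routing action for a touch-tone selection, or replay the menu."""
    return IVR_MENU.get(digit, "replay_menu")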
Read this blog to learn more about the difference between IVR and DVA.
Digital Voice Agent (DVA)
Unlike the IVR technology, DVA technology is not prerecorded or scripted messaging. It is highly conversational technology that has the ability to learn from each interaction. A DVA is a software agent that can perform tasks or services for an individual based on commands or questions. The term “voicebot” is sometimes used in place of DVA. Voicebots, or DVAs, are able to interpret human speech and respond based on machine learning assessments and information that is accessible to the technology. DVAs have become widely used and accepted by consumers. Many homes now utilize DVA-style technology through products like Google Assistant, Apple Siri, and Amazon’s Alexa. These products are able to verbally accept and respond to questions presented by humans. DVAs are similarly being used in business applications, as some consumers prefer to correspond with a DVA rather than with humans.
“Unlike the IVR technology, DVA technology is not prerecorded or scripted messaging. It is highly conversational technology that has the ability to learn from each interaction.”
The perceived values of DVA technology over human-to-human communication are:
Speed: Almost everyone has experienced long wait times when calling a call or contact center, mainly due to limited support staff. Unlike humans, machines can be scaled up almost instantly in case of a surge in call volumes and can serve every caller with virtually zero wait time. Moreover, machines can perform actions much faster than humans; for instance, a machine can look up information in a database, send documents, or raise downstream tickets faster than a human.
Convenience: Most DVA or voicebot technology is available for consumer use 24 hours a day, seven days a week. This allows consumers to communicate at a time and place that is convenient for them, which is also a requirement under federal statute for certain ARM industry activities. Additionally, machines can communicate with consumers without violating laws or judging them during a financial crisis.
The TCPA is the most significant compliance consideration for outbound dialing, including DVA technology. As previously discussed, the TCPA prohibits calls to cellular telephones using an ATDS or an “artificial or prerecorded voice,” without the prior express consent of the called party. 47 U.S.C. § 227(b)(1)(A)(iii). The TCPA also prohibits “using an artificial or prerecorded voice to deliver a message” to residential landlines. 47 U.S.C. § 227(b)(1)(B). However, commercial calls that do not constitute telemarketing or advertisements are exempt from the residential landline restrictions pursuant to FCC classification. 47 U.S.C. § 227(b)(2)(B); Rules and Regulations Implementing the Telephone Consumer Protection Act of 1991, CC Docket No. 92-90, Report and Order, 7 FCC Rcd. 8752, 8773 (1992). Thus, where there is an existing business relationship or where the call is not for telemarketing or advertising purposes, the TCPA restrictions are limited to calls to cellular telephones.
As previously discussed, Facebook provides a framework in which to consider risks related to outbound telephone calls. The DVA itself does not have the capacity to randomly or sequentially store or dial phone numbers and therefore does not constitute an ATDS. However, a DVA is not the system or mechanism generating the outbound call on its own. Thus, compliance considerations still require a review of the dialing technology to ensure the systems and infrastructure are not considered an ATDS, or that the agency’s policy has a solid consent defense, which we will discuss later. The other important analysis is that of artificial or prerecorded voice.
The TCPA does not provide a definition of artificial or prerecorded voice. One Court has utilized a dictionary definition of “prerecorded” meaning “recorded in advance.” See Bakov v. Consolidated World Travel, Inc., 2019 WL 6699188 (N.D. Ill., Dec. 9, 2019). Other Courts have reviewed “prerecorded” considering its plain language and meaning but failed to provide a definition. See Lardner v. Diversified Consultants, Inc., 17 F.Supp.3d 1215 (S.D. Fla. 2014); Braver v. NorthStar Alarm Servs., LLC, 2019 WL 3028651 (W.D. Okla. July 16, 2019). The definition of artificial has received less attention from the plaintiffs’ bar than prerecorded messages or ATDS allegations. Thus, we turn to the legislative intent and FCC interpretations. In 1993, just a few years after the enactment of the TCPA, the FCC commented on the purpose of prerecorded messages and defended the TCPA’s constitutionality. See Moser v. F.C.C., 1993 WL 13101270 (9th Cir. 1993). The FCC further identified that pre-recorded announcements are different from human interchange, noting that machines cannot ascertain the propriety of proceeding with a message and that live calls include a dialogue rather than an announcement. Id. at *20.
“The TCPA does not provide a definition of artificial or prerecorded voice. One Court has utilized a dictionary definition of ‘prerecorded’ meaning ‘recorded in advance.’”
Given the legislative intent of Congress in drafting the TCPA to solve the problems associated with IVR outbound dialing in the early 1990s, and the fact that machine learning and artificial intelligence-based DVA technology developed after the TCPA’s enactment, Congress’s intent was not to ban “live” calls to consumers. Instead, Congressional intent shows that the TCPA was enacted to discontinue the use of artificial or pre-recorded voice systems that were considered a nuisance or an invasion of privacy, forcing consumers through a decision tree-based script that often led them in never-ending systemic circles. DVA technology is uniquely distinguished from its IVR counterparts in that it utilizes machine learning and artificial intelligence, allowing the system to make decisions based on voice recognition and to continually learn from ongoing interactions.
Artificial Intelligence (AI) is, in essence, human-like learning and communication behavior by a machine or system. AI is commonly classified along two dimensions: functionality and capability. In terms of functionality, a basic AI has no memory and does not have the ability to learn from past actions; in terms of capability, by adding memory and past information, an AI can make better decisions. GPS location apps are good examples of AI.
Machine learning is when software is able to successfully predict and react to unfolding scenarios based on previous outcomes. The systems develop patterns, predict, and learn based on disposition data. They can make adjustments to prior outputs without being programmed to do so, and are therefore truly more interactive.
Understanding the technological differences between an outbound IVR and a DVA is significantly important to the assessment of risk in the use of this new technology in the ARM space. As previously mentioned, there are significant differences between these two types of technologies. The main difference in evaluating risk is what constitutes a pre-recorded message and what constitutes an artificial voice. Let’s begin with pre-recorded messages. Congressional intent on pre-recorded messages was simple: to avoid calls to a consumer’s cellular telephone that leave a pre-recorded message on the consumer’s voice mail. The DVA does not utilize pre-recorded messages for voice mail messaging, nor does it play a series of pre-recorded scripts to provide consumers with options to select one of several predetermined paths to additional questions. DVA technology corresponds with a consumer based on that consumer’s statements, questions, or responses. It then utilizes real-time machine learning to determine how to respond to the consumer’s inquiry, no different than a human agent in a call center. Systems, like people, will sometimes be unable to provide an answer to a specific question and may transfer the call to another person or system to respond accordingly in an attempt to provide the requested service to the consumer. The ability of the DVA system to interact with a human in real time and correlate responses to questions posed by a consumer, rather than pre-disposed by the company, is a major differentiation in the technology and in the applicability of the statutory constraints provided in the TCPA.
In the event your risk analysis, in part due to the limited judicial interpretation of the use of DVA technology, does not meet your risk tolerance levels, don’t stop there. Take a look at other compliance considerations, such as prior express consent, human intervention, or peer-to-peer solutions, that may assist in the analysis and help you find a strategy to comply with the TCPA. Prior express consent is a viable and solid defense to an alleged violation of the TCPA for the use of outbound DVA or IVR technology.
While pre-recorded messaging and artificial voice are differentiated from the ATDS analysis in the TCPA, prior express consent (PEC) is an absolute defense to all three arguments. The phrase “prior express consent” is not defined under the TCPA or FCC regulations. However, in 1992, the FCC addressed the issue of PEC in the context of calling wireless numbers by stating:
Persons who knowingly release their phone numbers to a caller have in effect given their invitation or permission to be called at the number which they have given, absent instructions to the contrary. However, if a caller’s number is “captured” by a caller ID or an ANI device without notice to the residential telephone subscriber, the caller cannot be considered to have given an invitation or permission to receive autodialer or prerecorded voice message calls.
A process analysis is necessary to ensure that the prior express consent is documented properly, saved, and accessible in the event your company is subject to a TCPA lawsuit or regulatory review. A few considerations for analyzing your prior express consent policy:
Prior express consent can be obtained either directly from the consumer by the agency or passed through to the agency from the creditor. Furthermore, the 9th Circuit Court of Appeals states in summation that the TCPA requires “that prior express consent must have been given either orally or in writing.” Loyhayem v. Fraser Financial and Insurance Services, Inc., No. 2:20-cv-00894-MWF-JEM (9th Cir. 2021). While pass-through consent has been determined by various courts to pass muster under the TCPA, direct consent is obviously better for several reasons.
First, pass-through consent is obtained at the time of service. Depending on the type of service provided, a consumer could argue that the consent was obtained under duress or that the required provisions made the agreement a contract of adhesion, meaning the consumer had no choice but to accept the terms without condition. Depending on the type of service obtained, those arguments may have merit.
Second, the language in the underlying consumer agreements must contain the proper consents (e.g., calls initiated through an automated telephone dialing system; calls initiated with a pre-recorded or artificial voice). Agencies may be able to influence the creditor’s consent language in the underlying consumer agreement, but they do not control language modifications over the life of the creditor-consumer relationship, which often occur through electronic updates to consumer agreements.
Third, underlying consumer agreements are stored and maintained by the creditor. As a third-party provider of collection or care services, the agency is dependent not only on the underlying consumer consent language but also on the manner in which the creditor stores, maintains, and reproduces the documentation to prove the consumer provided prior express consent. Fourth, considerations of the time elapsed between the contractual agreement and the outbound attempt are important, as reassigned numbers have created liability for outbound calls. See Soppet v. Enhanced Recovery Co., LLC, 679 F.3d 637 (7th Cir. 2012).
“Consumers should be offered a simple and clear way to revoke consent or opt-out of future IVR or DVA based call attempts.”
The Consumer Financial Protection Bureau (CFPB) addressed this concern around time in Regulation F, 12 CFR Part 1006 (November 30, 2020). In Regulation F, the Bureau identified concerns regarding reassigned phone numbers in its recommendations on text message communications, suggesting that agencies consider a scrub against the Federal Communications Commission’s (FCC) reassigned numbers database anytime more than 60 days have elapsed since the last contact with a consumer. See Regulation F, Section 1006.6(d)(5). While Reg F provides this recommendation and offers a safe harbor to agencies that follow it, it is not a rule or requirement but instead an optional compliance consideration that affords a safe harbor or bona fide error defense under the FDCPA. Scrub processes for reassigned phone numbers will likely provide another layer of defense and compliance for an outbound DVA or IVR based call. Finally, consumers should be afforded a simple and clear way to revoke consent or opt out of future IVR or DVA based call attempts. One way to provide the consumer an opt-out option is to offer it in the IVR or DVA, on a consumer-facing website, or via a toll-free phone number. Agencies should have systemic, regularly audited processes for handling consumer revocations where consent is the only or primary defense.
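As a rough sketch of that 60-day reassigned-number check, assuming a hypothetical lookup client for the reassigned numbers database (the names below are illustrative, not a reference to any specific vendor API):

from datetime import date, timedelta

RECONFIRM_AFTER = timedelta(days=60)  # Reg F safe-harbor window discussed above

def needs_reassignment_scrub(last_contact: date, today: date) -> bool:
    """True if more than 60 days have passed since the last contact,
    meaning the number should be re-checked against the reassigned numbers database."""
    return today - last_contact > RECONFIRM_AFTER

def may_text(number: str, last_contact: date, today: date, scrub_client) -> bool:
    """scrub_client is a hypothetical wrapper around a reassigned-numbers lookup."""
    if needs_reassignment_scrub(last_contact, today):
        return not scrub_client.is_reassigned(number)
    return True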
Around 2008-2010, the ARM industry experienced a rash of TCPA litigation claims premised on the use of an ATDS for outbound calls. Technology was developed to overcome these restrictions by requiring human intervention prior to an outbound dial, regardless of the type of dialing system utilized by the calling party. [Insert MCA Patent reference]. This technology was designed to allow for the continued efficiencies in call center routing, reporting, and compliance that had been built into dialing systems over the prior years.
The ARM industry commonly has leaned, and continues to lean, on dialing system infrastructure to maintain a rules-based process to ensure compliance with various state and federal regulations that restrict the time and number of calls to consumers in a given period. Human intervention is conceptually very simple: it requires a human to click a button or dial the ten-key number prior to call initiation from the dialing system. The first case to address this technology was Strauss v. CBE Grp., Inc., 2016 U.S. Dist. LEXIS 45085 (S.D. Fla. Mar. 28, 2016), ruling that with human intervention, such as the click of a button, the system does not qualify as an autodialer under the TCPA. This instrumental ruling blazed the litigation trail and paved the way for 10+ additional federal court rulings throughout the country. This additional compliance consideration provides protection against the allegation of the illegal use of an ATDS. However, it does not provide any real protection in regard to pre-recorded or artificial voice allegations (or at least the Courts have not addressed that question).
“Once consent is obtained (either direct consent or pass-through consent), then the outbound DVA option to communicate with consumers become extremely viable and significantly defensible.”
Peer-to-Peer (P2P) technology is very similar to human-intervention technology and is primarily used today in outbound text messaging campaigns. P2P is similar to human intervention prior to an outbound phone call, with a few differences. First, P2P was built for outbound text messaging but could be deployed on outbound phone call attempts as well, although I have not yet seen the technology utilized in this manner.
Human intervention for outbound dialing is customarily provided by a human in a call center environment and is oftentimes generated off-shore. The human intervenes in the dialing system to generate the outbound dial by pushing a button (a key on a keyboard, a computer mouse, or a similar device). P2P processes are built into the texting process and are application-based. The application sends a message to employees or contractors prompting them to log into the P2P system. The employee or contractor then presses a button on their cellular device, iPad, computer, or other electronic device to initiate the text to be sent to the carrier or aggregator for delivery to the consumer.
The efficiency of P2P is better than human intervention, and it has also been reviewed by the FCC, which opined that a P2P process is not an ATDS. See FCC Declaratory Ruling, In the Matter of Rules and Regulations Implementing the Telephone Consumer Protection Act of 1991, P2P Alliance Petition for Clarification, GC Docket No. 02-278, DA 20-670 (June 25, 2020). P2P provides a very solid defense to an allegation that an ATDS was illegally used to generate the outbound call or text message but, like human intervention, has not been vetted by the Courts as it relates to artificial or prerecorded voice allegations.
So, why even discuss P2P for IVR or DVA technology if it does not protect against the TCPA concerns regarding artificial or prerecorded voice? Consent. P2P technology provides ARM companies an affordable and effective means of obtaining direct consumer consent prior to the use of DVA or IVR technologies. Companies today have the ability to send documents to consumers via text message through P2P platforms; those documents can be signed electronically and returned to the sender for consent capture. Once consent is obtained (either direct consent or pass-through consent), the outbound DVA and IVR options for communicating with consumers become extremely viable and significantly defensible. Consent is a must for a pre-recorded message defense, and human intervention/P2P is a must for defense against an ATDS allegation.
Conclusion
As previously addressed, there are several different compliance considerations to evaluate when determining what to deploy as internal business strategy and what to document in policies and procedures. The type of systems used drives the risk analysis. Individual risk tolerance will determine which compliance considerations to implement, or perhaps how to develop multiple layers of protection utilizing some, all, or additional options beyond the compliance considerations discussed in this article. What I believe to be true is that doing nothing and not utilizing these technologies is not a good long-term plan or option.
As an industry, and in society in general, we continue to experience a shift in communication modalities. We are also facing labor arbitrage, inflation, and a generational shift in communication preferences. At the end of the day, finding a consumer’s preferred communication method is key to right-party contacts, and right-party contacts are the key to success in the ARM industry. Thus, ARM companies must consider generational shifts, labor arbitrage, and current financial impacts.
The current workforce and debtor community is composed primarily of Millennials and Generation Z. These generations communicate primarily by electronic means and prefer to speak to a computer or artificial voice rather than to a human call center agent.
Furthermore, we are experiencing inflationary pressures and worker shortages, due in large part to the retirement of the baby boomers, which continue to drive hourly rates to unprecedented, and, at some point in the near future, unmanageable levels at current fee rates.
Over the years, the ARM industry solved, or attempted to solve, its labor concerns with near-shore or off-shore solutions that again utilize humans doing similar work but at lower labor rates and lower overall costs. While labor concerns continue to exist and likely always will, the more impactful change is the shift in consumer communication preferences. You can staff call centers all day long, but if people will not pick up the phone and communicate with a live representative, then the model has to change.
The technological advancements and shift in communications preferences make the utilization of this technology paramount to the continued success of any ARM business into the future.
Owing to its far-reaching repercussions, compliance management has become an issue of great consequence. It’s a challenge of change: frequent regulatory changes create ambiguity for collection agencies. For instance, Regulation F of the Consumer Financial Protection Bureau (CFPB) came into effect on November 30, 2021, and is the most significant debt collection rulemaking to date. Any creditor, whether the original issuer or a debt buyer, faces challenges in responding to it. Even more tedious is training and retraining agents and repeatedly setting up processes and tools to meet regulatory requirements.
When it comes to compliance, the devil is in the details. A human agent under varying stress and performance pressure is prone to making mistakes, but even an innocuous breach of compliance can result in hefty fines and penalties. Even without state or local mandates around debt collection practices, federal regulations must be followed to avoid penalties or lawsuits from consumers or enforcers. The CFPB has levied $1.7 billion in civil penalties and secured over $14.4 billion in relief for American consumers over the last ten years. Compliance has thus evolved into a significant pain point for debt collection agencies.
We have reached a point where compliance is not just an expense item but also a source of differentiation for collection agencies. Unsurprisingly, most debt collection agencies are looking for tech solutions that can help them be more agile and efficient. Voice AI is one emerging solution with significant disruptive potential and growing use cases.
Too Many Calls, Too Little Communication
One of the prime objectives of compliance is to protect the customer from unfair practices and harassment. CFPB bases much of its enforcement authority on the concept of UDAAP (unfair, deceptive, and abusive acts or practices).
A call at the right time, to the right person, and with the right message can achieve the 3 Cs of debt collection: Cost, Compliance, and Customer Experience. A human agent may struggle to accomplish this triad, making too many or too few calls, but it’s a cakewalk for an intelligent voice agent.
The formal, statutory fees and levies, which are increasingly hefty, represent just the tip of the iceberg, around 10% of total regulatory costs. The broader cost of compliance is much bigger, making it a formidable burden.
Here are the common challenges faced by debt collection agencies today:
Ever-Expanding List of Laws: The Fair Debt Collection Practices Act (FDCPA), Telephone Consumer Protection Act (TCPA), Fair Credit Reporting Act (FCRA), Payment Card Industry (PCI) compliance, and Health Insurance Portability and Accountability Act (HIPAA) are part of a growing list of regulations, adherence to which is a core driver of success for debt collection agencies and similar financial institutions.
High Cost of Continual Training and Vigilance: A survey of sector firms by the Credit Services Association (CSA) reveals that, in staffing terms, the proportion of resources involved in compliance generally trends between 15% and 25% of total resources. That is a significant share, and an opportunity to cut costs.
Client Expectations and Audit Requirements: Clients of collection agencies are deeply concerned with compliance and exert pressure, even more than regulators, to comply. As per a CFPB report, collection agencies with large clients face 17 audits in a year, an average of roughly 3 audits every 2 months. The lack of transparency between debt collectors and consumers makes it difficult for agencies to facilitate these audits effectively. Meeting such high expectations cost-effectively is a formidable challenge.
Insufficient Time to Design and Implement Compliance Effectively: Rapid and frequent regulatory changes leave collection agencies running from pillar to post to update their processes. Deploying AI-enabled voice agents can minimize the training and guidance cost.
High Cost of Not Meeting Compliance Requirements: Failing to meet compliance requirements has, in the past, led to grave consequences. Encore and Portfolio Recovery Associates, two giants in bad-debt collections, were fined $18 million in 2015 and forced to refund or halt collection of over $160 million in consumer debts. Violating the Do Not Call registry can cost agencies anywhere between $500 and $1,500 per violation under the TCPA. Moreover, razor-thin margins make the total cost of attorney fees, settlement costs, and the opportunity cost of time too much for agencies to bear.
Voice AI and Its Ability to Empower Collection Companies to Manage Compliance
More often than not, compliance is a matter of adhering to protocols and procedures. AI-enabled digital voice agents that can religiously follow a given set of instructions prove far superior in adherence to the regulatory framework.
There are numerous instances where small mistakes land collection agencies in trouble. Here are some simple yet powerful examples of how Voice AI can help with compliance (a minimal code sketch of a few of these checks follows the list):
Honoring the Do Not Call Registry and Data Scrubbing: Under the Telephone Consumer Protection Act (TCPA) and related telemarketing rules, subscribers on the Do Not Call registry may not receive telemarketing or automated dialer calls unless they have given consent. It is essential to scrub the data and check for permission before dialing these contacts. The solution is to scrub the data against databases such as do-not-call registries (external and internal), consumers represented by attorneys or debt settlement companies, deceased consumers, serial litigators, bankrupt consumers, and consumers covered by cease-and-desist orders. Unlike human agents, who can fumble, digital voice agents perform this scrubbing with the help of APIs in a fraction of a second.
Calling Within Permissible Hours: The FDCPA does not allow collection agencies to contact consumers outside of 8:00 a.m. to 9:00 p.m. local time unless the consumer has given explicit consent. Additionally, customers with night jobs may not wish to be contacted during the day. Such personalization across large portfolios proves to be a daunting task for a human agent but an effortless one for a digital voice agent.
Calling Frequency: Regulation F of the CFPB limits the frequency of calls under the 7/7/7 rule, restricting agencies from making more than 7 call attempts to a consumer within 7 consecutive days. The 7/7/7 rule counts voicemails, unanswered calls, and messages left on the consumer’s phone, and excludes email and text messaging. Furthermore, agencies cannot attempt contact within the 7 days following a successful telephone conversation. It is taxing for human agents to consistently follow these rules across the entire customer base while optimizing time and cost; configuring machines to follow all these rules, on the other hand, is possible with a click.
Mini-Miranda Disclosure: The Mini-Miranda is mandatory under the FDCPA in the first communication in any channel. Digital voice agents never fail to comply with such regulatory requirements.
Failure to Discontinue Communication Upon Request: Communicating with consumers in any way (other than through litigation) after receiving a cease-communication notice, with certain exceptions, can lead to lawsuits. Machines follow strict protocols and comply with the requests submitted by consumers.
Communicating with Consumers at Their Place of Employment: It is illegal to contact a consumer at work after being advised that this is unacceptable or prohibited by the employer. Human agents under dire conditions can fail to honor these guidelines. Machines, on the other hand, reach out at the right time and frequency, achieving high conversion rates while remaining compliant.
Contacting a Consumer Represented by an Attorney: With certain exceptions, agents must not contact consumers who have chosen not to be contacted by agencies and have retained attorneys to handle communications.
Communicating with a Consumer During the Validation Period: Human agents can make the mistake of trying to establish communication with the consumer or pursue collection efforts after receiving a request for verification of a debt made within the 30-day validation period. Digital voice agents, on the other hand, are configured not to engage in any such activities and to resume automated collection calls only once the validation period is over.
Misrepresentation and Threatening Arrest or Legal Action: With variable incentives as a major wage component, it is quite common for debt collectors to misrepresent themselves as attorneys or law enforcement officers. The FDCPA prohibits such misrepresentation and provides for punitive enforcement. Digital voice agents follow strict protocols and never resort to such malpractice.
Abusive or Profane Language: The use of abusive or profane language in any communication related to the debt is prohibited. Digital voice agents never resort to such practices in order to achieve results.
Communication with Third Parties: Revealing or discussing the nature of a debt with third parties (other than the consumer’s spouse or attorney) is prohibited; collectors may contact third parties only to learn the location of the debtor, without mentioning debt-related information. Intelligent voice agents can confirm the right party before giving out any information.
Raising a Dispute: The voicebot can also help consumers raise a dispute over a call and tag it in the CRM so that the relevant team can pick it up.
Validation: When asked for validation information, the voicebot can immediately send an electronic copy of the validation notice and mark the contact with a relevant tag so that human agents can see the status, and neither the voicebot nor human agents attempt to communicate with the consumer for the next 30 days.
Raising Tickets: The voicebot can even raise tickets to send physical copies of the validation notice if explicitly requested by the consumer.
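Below is a minimal, hypothetical sketch of how a few of the checks above (do-not-call and attorney-representation scrubbing, permissible calling hours, cease-communication requests, and validation-period holds) could be combined into a single pre-dial gate. The account fields and function names are illustrative assumptions, not Skit.ai’s actual implementation.

from datetime import datetime, time

CALL_START, CALL_END = time(8, 0), time(21, 0)  # FDCPA window, consumer's local time

def within_permissible_hours(local_now: datetime) -> bool:
    """Check the 8:00 a.m. to 9:00 p.m. local-time window."""
    return CALL_START <= local_now.time() <= CALL_END

def pre_dial_gate(account, local_now: datetime) -> bool:
    """Return True only if every pre-dial compliance check passes.
    `account` is a hypothetical record exposing the boolean fields below."""
    if account.on_do_not_call_list or account.attorney_represented:
        return False
    if account.cease_communication_requested:
        return False
    if account.in_validation_period:  # dispute/validation hold
        return False
    return within_permissible_hours(local_now)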
With Distinct Advantages, Voice AI Will Play a Bigger Role in Compliance Management
Beyond its numerous other use cases, the utility of intelligent voice agents in improving the compliance of debt collection agencies is fast emerging and very promising.
Apart from the direct costs of compliance, indirect costs such as fines and penalties take a heavy toll on companies. Today, compliance has become more than an expense; it is a source of differentiation. Many companies have already begun adopting Voice AI, and its ever-expanding use cases will help them create a distinct competitive advantage.
For more information and a free consultation, let’s connect over a quick call. Book now!