GenAI compliance is an oxymoron. Ways to make the best of it

One of the biggest challenges CIOs face today is reconciling the constant pressure to deploy generative AI tools with the need to keep their organizations in compliance with regional, national, and often international regulations.

The heart of the problem is a contradiction deeply embedded into the very nature of generative AI systems. Unlike the software that most IT workers have trained on for decades, genAI is predictive, trying to guess the next logical step. 

If someone doesn’t write very explicit and prescriptive limitations about how to deal with the assigned problem, genAI will try to figure it out on its own based on the data it’s been exposed to.

The classic example of how this can go wrong: HR feeds a genAI tool a massive stack of job applications and asks it to identify the five candidates whose backgrounds most closely match the job description. The tool, drawing on data about all current employees, finds patterns in age, gender, and other demographics and concludes that this must be the kind of applicant the enterprise wants. From there, it’s only a short walk to regulators accusing the enterprise of age, racial, and gender discrimination.

Confoundingly, genAI software sometimes does things that neither the enterprise nor the AI vendor told it to do. Whether that’s making things up (a.k.a. hallucinating), observing patterns no one asked it to look for, or digging up nuggets of highly sensitive data, it spells nightmares for CIOs.

This is especially true when it comes to regulations around data collection and protection. How can CIOs accurately and completely tell customers what data is being collected about them and how it is being used when the CIO often doesn’t know exactly what a genAI tool is doing? What if the licensed genAI algorithm chooses to share some of that ultra-sensitive data with its AI vendor parent? 

“With genAI, the CIO is consciously taking an enormous risk, whether that is legal risk or privacy policy risks. It could result in a variety of outcomes that are unpredictable,” said Tony Fernandes, founder and CEO of user experience agency UEGroup.

“If a person chooses not to disclose race, for example, but an AI is able to infer it and the company starts marketing on that basis, have they violated the privacy policy? That’s a big question that will probably need to be settled in court,” he said.

The company doesn’t even need to use those details in its marketing to get into compliance trouble. What if the system records the inferred data in the user’s CRM profile? What if that data is stolen during an attack and gets posted somewhere on the dark web? How will the customer react? How will regulators react?

Ignorance of the law (or the AI) is no excuse

Complicating the compliance issue is that there is not merely a long list of global privacy regulations for CIOs to grapple with (the most well-known of which is the EU’s GDPR), but also a ton of new AI regulations on the books or in the works, including the EU AI Act, bills under consideration in multiple US states, the White House’s Blueprint for an AI Bill of Rights, Japan’s National AI Strategy, various frameworks and proposals in Australia, the Digital India Act, New Zealand’s Algorithm Charter, and many more.

Companies must plan how they’ll comply with these and other emerging regulations — a task that becomes infinitely harder when the software they’re using is a black box.

“Enterprise CIOs and their corporate counsels are right to be nervous about genAI because, yes, they cannot truly validate or disclose the information being used to make decisions. They need to think about AI differently than other forms of data-driven tech,” said Gemma Galdón-Clavell, an advisor to the United Nations and EU on applied ethics and responsible AI, as well as founder and CEO of AI auditing firm Eticas.AI.

“When it comes to AI, transparency around information sources is not only impossible, it’s also beside the point. What’s important is not just the data going in, but the results coming out,” Galdón-Clavell said. CIOs must get comfortable with less visibility into genAI than they would accept almost anywhere else, she said.

It’s precisely that absence of transparency that concerns Jana Farmer, a partner at the Wilson Elser law firm. She sees a big legal problem in the lack of comprehensive and detailed information enterprises get from the AI vendors from whom they license the genAI software. Her worries go beyond the limited information about how the models are trained.

“Do we want to play with a system when we don’t know where it keeps its brain?” she asked. “When you look at the emerging regulations, they are basically saying that if you deploy [AI], you are responsible for what it does, even if it disobeys you.”

Enterprises are already being sued over their use of genAI. Patagonia recently got hit with a customer lawsuit alleging that the retailer did not disclose that genAI was listening in on customer calls, collecting and analyzing data from those calls, and storing the data on the servers of its third-party contact center software vendor. It’s unclear whether Patagonia knew everything the genAI program was doing, but ignorance is no excuse. 

The emerging rules around AI adopt the legal concept of strict liability, Farmer said. “Let’s say that you own a train company and it uses dangerous machinery. You have to make that thing safe. If you don’t, it doesn’t matter that you tried your best. Saying ‘we tested it every which way and it never did it before’ won’t satisfy regulators,” she said, adding that CIOs must perform extensive and realistic due diligence.

“If you have not done [realistic due diligence], the answer ‘I didn’t know that it would do that’ doesn’t do you much good,” Farmer said.

Indemnification is not the (full) answer

Farmer said that she has seen various businesses trying to contractually remove their liability by asking the AI vendor to indemnify them against costs or legal issues arising from their use of genAI tools. It often doesn’t help nearly as much as the enterprise executives hope it will.

She said the AI vendor will usually stipulate that it covers all costs only if it is found to have been negligent by a recognized court or regulatory body. “If, and only if, we are found to have been negligent, we will indemnify you later on,” she said, paraphrasing the typical vendor position.

This brings the enterprise back to awareness of exactly what the genAI program is doing, what data it is examining, what it will be doing with its analysis, and so on. For many different reasons, Farmer said, executives often do not know what they need to know.

“The issue is not that nobody in the organization knows what data is being processed, but that understanding information practices is a ‘whole business’ issue, and the various departments or stakeholders are not communicating,” she said. “Marketing may not know what technologies IT has implemented, IT may not know what analytics vendors Marketing engaged and why, etc. Not knowing the information that privacy laws require to be disclosed is not an acceptable response.”

This gets even trickier when genAI tries to extrapolate insights from data.

“If an AI system can make inferences from existing data, that needs to be transparently disclosed, and the standards are usually those of reasonableness and foreseeability. Deployers of genAI should make transparent disclosures to consumers that are interacting with AI — what data the system was trained on and has access to — and advise of the system’s limitations,” Farmer said.

UEGroup’s Fernandes noted that an AI’s inferences may simply be wrong, citing an example from his own life: “I get Spanish-language stuff served to me, but I don’t know a lick of Spanish. In fact, my surname is Portuguese, but to Americans, it is all the same.” Because of Portugal’s colonial past, some people in Brazil and India share his surname, so he receives ads targeted to those nationalities as well.

“There is too much nuance and context in the human condition for the algorithm writers to understand all of human history and assign accurate probabilities,” he said. “[AI] can be so damn wrong for so many reasons. At the end of the day, it is an imperfect manmade thing that embodies the biases of the programmer and the data.”

Given its risks, genAI isn’t right for every situation, attorney Farmer noted. “Depending on the use case and the risk assessment, the question may be whether the organization should be deploying an AI system in the first place. For example, if a genAI model is used in decision making in connection with education, employment, financial, insurance, legal, etc., those are likely going to be high risk, and the risks/compliance requirements may outweigh the benefits,” she said.

Fernandes agrees. In fact, he questions whether any organization should be deploying genAI today, given the opaque nature of the technology.

“Does it make sense to deploy software to fly a plane that will act in ways that you cannot anticipate? Would you put your child or grandchild into an autonomous vehicle alone, where the actions the software takes cannot be anticipated?” he asked. “If the answer is ‘no,’ then why would any CIO in their right mind do that with a piece of software that may put their entire organization at risk?”

4 techniques for addressing genAI compliance risk

For lower-risk scenarios (or when “just say no” isn’t an option), doing some hard prep work can help protect organizations against the legal risks associated with genAI.

Shield sensitive data from genAI

IT has to put careful limits on what genAI can access. Technologically, that can be done either by constraining what genAI is allowed to do — known as guardrails — or by protecting sensitive assets independently.

It is perhaps best to think of genAI as a toddler. Should a parent tell the child, “Don’t go into the basement, because you could be very badly hurt down there”? Or should they add a couple of high-security deadbolts to the basement door?

Ravi Ithal, CTO of Normalyze, a data security vendor, said that a recent prospect was experimenting with Microsoft Copilot. “Within the first day, an employee asked the system to see all documents with their name in it. Copilot returned dozens of documents, one of them being a confidential layoff list with the employee’s name on it. The system did what it was told to do, given that it was told things without context about what data could or could not be used for output to this employee.”

This problem should be very familiar to IT veterans. Back around 1994, during the early days of companies aggressively using the web for corporate intranets, it was a typical move for reporters to search for “confidential” and start reviewing the tons of sensitive documents that Yahoo delivered. (Google didn’t yet exist.)

In the same way that information security professionals of that era quickly learned ways to block search engine spiders and/or to place sensitive documents into areas that were blocked from such scanning, today’s CISOs must do the same with genAI.
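The blocking can start before any model ever sees a document. As a minimal sketch of the idea (the patterns, labels, and function names here are illustrative assumptions, not any particular product's API), a pre-index filter might honor an organization's explicit classification labels first and fall back to pattern matching:

```python
import re

# Hypothetical sensitivity markers. A real deployment would lean on the
# organization's own classification labels, not keyword matching alone.
SENSITIVE_PATTERNS = [
    re.compile(r"\bconfidential\b", re.IGNORECASE),
    re.compile(r"\blayoff\b", re.IGNORECASE),
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # US Social Security number format
]

def is_safe_to_index(doc_text: str, labels: set[str]) -> bool:
    """Return True only if a document may enter the genAI retrieval index."""
    if labels & {"confidential", "restricted", "hr-only"}:
        return False  # honor explicit classification labels first
    return not any(p.search(doc_text) for p in SENSITIVE_PATTERNS)

corpus = [
    ("Q3 marketing plan", set()),
    ("CONFIDENTIAL: layoff list", {"hr-only"}),
]
indexable = [title for title, labels in corpus if is_safe_to_index(title, labels)]
```

Keyword matching by itself is a weak lock; in practice it would be paired with per-user access controls enforced again at query time, so that a document excluded for one audience never reaches their prompts at all.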

Learn everything you can from the AI vendor

Robert Taylor, an attorney with Carstens, Allen & Gourley, said that even though most AI vendors do not disclose everything, some CIOs don’t make the effort to identify every informational morsel that the AI vendor does disclose. 

“You need to look at the vendor’s documentation, their service terms, terms of use, service descriptions, and privacy policy. The answers you may need to disclose to your end users may be buried there,” Taylor said.

“If the vendor has disclosed it to you but you fail to disclose it to your end users, you are likely on the hook. If the vendor hasn’t proactively made these disclosures, the onus is on you to ask the questions — just as customers routinely do with vendor security assessments,” he said.

Some enterprises have explored minimizing the vendor visibility issue by building their genAI programs in-house, said Meghan Anzelc, president of Three Arc Advisory, but that merely reduces the unknowns without eliminating them. That’s because even the most sophisticated enterprise IT operations are going to be leveraging some elements created by others.

“Even in the ‘build in-house’ scenario, they are either using packages in Python or services from AWS. There is almost always some third-party dependence,” she said. 

Keep humans in the loop

Having human employees in genAI workflows can slow operations and cut into the very efficiency gains that justified adopting genAI in the first place. Even so, Taylor said, a little spot checking by a human can sometimes be effective.

He cited the example of a chatbot that told an Air Canada customer they could buy a ticket immediately and get a bereavement credit later, which is not the airline’s policy. A Canadian civil tribunal ruled that the airline was responsible for reimbursing the customer because the chatbot was presented as part of the company’s website.

“Although having a human in the loop may not be technically feasible while the chat is occurring, as it would defeat the purpose of using a chatbot, you can certainly have a human in the loop immediately after the fact, perhaps on a sampling basis,” Taylor said. “[The person] could check the chatbot to see if it is hallucinating so that it can be quickly detected to reach out to affected users and also tweak the solution to prevent (hopefully) such hallucinations happening again.”
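A sampling-based review queue of the kind Taylor describes can be quite simple: draw a random fraction of transcripts, then flag any bot replies containing phrases the company's policy forbids. This sketch is illustrative only; the function names, transcript format, and forbidden-phrase list are assumptions:

```python
import random

def sample_for_review(transcripts, rate=0.05, seed=None):
    """Pick a random fraction of chatbot transcripts for human spot checks."""
    rng = random.Random(seed)  # seedable for reproducible audits
    return [t for t in transcripts if rng.random() < rate]

def flag_policy_violations(transcript, forbidden_phrases):
    """Surface bot turns that appear to promise something outside policy."""
    return [
        turn for turn in transcript
        if turn["role"] == "bot"
        and any(p.lower() in turn["text"].lower() for p in forbidden_phrases)
    ]
```

Reviewers then see only the sampled transcripts; a flagged turn would trigger outreach to the affected customer and an adjustment to the bot's prompt or guardrails.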

Prepare to geek out with regulators

Another compliance consideration with genAI is the need to explain far more technical detail than CIOs have historically had to provide when talking with regulators.

“The CIO needs to be prepared to share a fairly significant amount of information, such as talking through the entire workflow process,” said Three Arc’s Anzelc. “‘Here is what our intent was.’ Listing all of the underlying information, detailing what actually happened and why it happened. Complete data lineage. Did genAI go rogue and pull data from some internet source or even make it up? What was the algorithmic construction? That’s where things get really hard.”
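Producing that kind of lineage on demand means recording it at the moment of each call. As a hedged sketch (the field names and record shape are assumptions, not any standard), an audit entry might capture the model version, hashes of the prompt and output, and the ID of every source document the model was given:

```python
import datetime
import hashlib

def log_genai_call(prompt, sources, model_version, output, log):
    """Append one audit record capturing the lineage of a genAI response."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        # Hashes prove exactly which text was used without storing it in the log
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "source_ids": [s["id"] for s in sources],  # every document the model saw
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
    }
    log.append(record)
    return record
```

Hashing rather than storing raw prompts and outputs keeps sensitive content out of the audit trail while still letting an enterprise demonstrate, record by record, what went into a contested answer.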

After an incident, enterprises have to make quick fixes to avoid repeats of the problem. “It could require redesign or adjustment to how the tool operates or the way inputs and outputs flow. In parallel, fix any gaps in monitoring metrics that were uncovered so that any future issues are identified more swiftly,” Anzelc said. 

It’s also crucial to figure out a meaningful way to calculate the impact of an incident, she added. 

“This could be financial impact to customers, as was the case with Air Canada’s chatbot, or other compliance-related issues. Examples include the potentially defamatory statements made recently by X’s chatbot Grok or employee actions such as the University of Texas professor who failed an entire class because a generative AI tool incorrectly stated that all assignments had been generated by AI and not by human students,” Anzelc said.

“Understand additional compliance implications, both from a regulatory perspective as well as the contracts and policies you have in place with customers, suppliers, and employees. You will likely need to re-estimate impact as you learn more about the root cause of the issue.”
