Reducing risk with generative AI: 5 key strategies

As early adopters race to implement gen AI, enterprise risk leaders need to respond just as quickly

Sam Cockroft, The Works contributor

Sep 06, 2023 · 7 MINS READ

Artificial intelligence has a long history of arousing suspicion and uncertainty. With the breakneck adoption rate of generative AI, though, FOMO appears to be the biggest AI worry in the C-suite. 

Let’s be clear: The business risks of generative AI are real. But CIOs and others needn’t fear them; they should simply get on top of managing them as early as possible.

As new generative AI applications begin tangling with databases, cloud systems, and other software, many potential conflicts will emerge—related to regulation, security, privacy, misinformation, and bias. That’s one reason why two-thirds of senior-level risk executives already consider generative AI a top emerging concern, according to an August 2023 Gartner report. In the same survey, they rated it a bigger threat than their own financial planning.

“We tend to think [AI risk] is just a Big Tech problem. It's just a problem of the company that's building the tool,” Beena Ammanath, a leading researcher in AI governance, told Protocol. “But the companies that are using it are equally responsible.”

To find out how early adopters of generative AI should be tackling “the problem,” we asked a number of AI experts, practitioners, and executives for their thoughts and recommendations. Here are five strategies, culled from our reporting, that can help companies manage risk when deploying generative AI.

1. Recruit a regulatory compliance team

AI is still very much in its infancy in the business universe—as are most of the regulatory bodies created to govern it. The Artificial Intelligence Act from the European Commission is considered the first comprehensive legal framework for AI, and it was only proposed in April 2021. 

Accenture reports that 62% of companies have developed an AI governance structure, but just 4% have established a comprehensive cross-functional compliance team to implement it. Building such teams prepares companies to meet new regulations—and has the potential to offer a competitive advantage in a world where consumers are becoming more concerned about digital safety.

Otherwise, a lack of preparedness could leave organizations playing catch-up and facing costly restructuring. The good news? More than 77% of organizations, according to Accenture, are prepared to make compliance with future regulations a priority. Businesses see the value in AI and are willing to take proactive steps on compliance and safety to scale it effectively.

2. Define strict data security and privacy measures

Programmers rely on vast quantities of data to accurately train AI models. But where this data comes from can be cause for unease, especially for consumers. Consumer data is often used to train AI, which makes data privacy an even more important consideration for organizations. A 2022 Cisco survey revealed that 60% of consumers are concerned about how organizations are using their personal data for AI. 

Fortunately, businesses are already prioritizing those risks: 92% of leaders admit their organizations need to do more to reassure customers about how their data is being used, according to another Cisco study. Employees should be taught which systems are approved for company use, what kinds of information can be shared with those systems, and how to engineer prompts to yield the best results.

Another essential step is creating a trusted ecosystem in which to work with generative AI. Harman Singh, an expert in cybersecurity and managing consultant at Cyphere, thinks companies haven’t been giving enough attention to this aspect of protection.

“Companies are slow in adopting self-hosted [large language] models in their environments. Generative AI used in publicly hosted LLMs such as ChatGPT and Bard already has the ability to read your queries,” says Singh. “But if models are hosted and trained internally, then employees using such models would find it suitable to do business within the confined environments.”

Companies don’t have to go as far as creating their own models. They can lower risk by relying on safer software such as Private AI’s PrivateGPT, which redacts sensitive data from prompts before they reach ChatGPT so employees can use the tool safely. In a similar vein, CalypsoAI, a startup that validates the security of AI apps, is helping companies prevent the sharing of sensitive data while also identifying weaknesses susceptible to external attack.
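To make the idea concrete, here is a minimal sketch of the redaction-before-prompting pattern that tools like these automate. The regex patterns and the send_to_llm() placeholder are illustrative assumptions, not the actual APIs of PrivateGPT or CalypsoAI; commercial products use far more sophisticated detection than regular expressions.

```python
import re

# Illustrative sketch only: a simple redaction pass that runs before a prompt
# ever leaves the company. The patterns and send_to_llm() below are
# hypothetical stand-ins, not any vendor's real interface.

REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}


def redact(text: str) -> str:
    """Replace likely-sensitive substrings with labeled placeholders."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text


def send_to_llm(prompt: str) -> str:
    """Placeholder: route the cleaned prompt to whichever LLM client is approved."""
    raise NotImplementedError("Wire this to your organization's approved model.")


def safe_prompt(user_prompt: str) -> str:
    """Redact first, then send, so raw customer data never reaches the external API."""
    return send_to_llm(redact(user_prompt))


if __name__ == "__main__":
    sample = "Customer jane.doe@example.com (card 4111 1111 1111 1111) wants a refund."
    print(redact(sample))
    # -> Customer [EMAIL REDACTED] (card [CARD REDACTED]) wants a refund.
```

The point of the pattern is simply that sensitive values are stripped inside the company’s own environment before any third-party model ever sees the prompt.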

3. Be especially vigilant in following IP and copyright laws

Employees can already use generative AI to help draft sales emails or create graphics for a presentation. But companies need to ensure they have the right to use this content.

Like overall AI regulatory compliance, AI copyright and intellectual property (IP) laws are not yet clear-cut. In one of the first such cases, the U.S. Copyright Office initially granted copyright protection to the author of a graphic novel containing AI-generated images; protection for those images has since been revoked. Similarly, Midjourney and Stable Diffusion, two popular AI image generation tools, have recently come under fire for using artists’ and photographers’ work without permission.

Many questions still linger about who the copyright belongs to when it comes to AI-generated content—and if the content can be copyrighted at all. 

This gray area raises an important consideration for businesses that employ those tools. If a company uses an AI generator that was trained on content the developers did not have permission to use, legal liability could fall back on the business itself.

Before employing AI tools, companies should ensure protections are included in any contracts with their technology partners. That includes confirming the platform has properly licensed its training data and offers indemnification in case it fails to follow IP laws.

Disclosing use of AI in content and ensuring proper licensing from the get-go can protect companies. Otherwise, they run the risk of legal ramifications as regulations emerge.

4. Train against misinformation and fake content

AI content generation tools make it easier for people to create fake content and spread misinformation both intentionally and unintentionally.

To reduce that risk, businesses should rely on AI tools as assistants, not creators. Integrating tools like Fathom, an AI meeting notetaker, or Rev, an AI transcription service, instead of using full-fledged content generators can streamline business processes while still maintaining crucial human oversight. 

While companies need to be cautious about the information they produce, they also need to be aware of what their own content could become. “Deepfakes,” or content digitally manipulated to mimic another person’s face and/or voice, are an increasingly popular tool for scammers, harassers, and criminals. 

Deepfakes open another avenue for targeted attacks and fraud. This content can be used to blackmail leadership or to impersonate a company executive in order to pry sensitive information from employees.

This is another area where a cross-functional AI governance team can play a crucial role: educating employees on how to spot a deepfake. Telltale signs include jerky movements and speech, poor lip syncing, and shifts in lighting or skin tone. These cues are useful, but the technology improves every day, so no checklist is foolproof. Continued education will be critical.

5. Consider bias during development and implementation

Bias remains one of the most significant challenges as AI becomes more pervasive. 

This is largely because bias is embedded in the data used to train AI models, and human attempts to address it can end up compounding the problem. ChatGPT, for example, relies on reinforcement learning from human feedback to reduce bias, but that process introduces new variation from the human raters themselves. Leaders should remember that bias is essentially inherent to AI technology: they can mitigate its risks, but they can’t eliminate it.

Addressing bias in AI also means addressing bias in ourselves. Diversifying the teams of developers working with AI and holding those teams to a higher standard can go a long way. Organizations can also implement dedicated research, training, and oversight programs.

Policies are already emerging to reduce the risk of bias in technology: New York City recently enacted a new law, the first of its kind, to protect against AI bias in hiring. Companies are now required to disclose to candidates any use of AI hiring tools and what data is being collected and used in the process. In addition, the tools and companies are subject to annual audits. Though some experts in the field are skeptical about whether it will be enough to address the issue, it is a starting point—and a much-needed push for companies to incorporate anti-bias frameworks into their processes.

As AI’s capabilities continue to grow, it will be even more critical to incorporate governance practices ahead of regulations.

Even as an expert in the field, AI researcher Ammanath believes navigating the murky waters of ethical AI is worth it: “I don't think it's all bad; [neither] is it all good. There are risks with it, and we need to address it. I want to bring more of that balanced perspective, a pragmatic, optimistic perspective.”
