Why your gen AI projects are stuck at the starting line

Organizations need to identify the right use cases, improve data quality, and be willing to experiment

Dan Tynan, The Works Contributor

Jan 09, 2024 · 6 min read

Companies of every type and size are dabbling with generative AI (GAI), looking to tap its abilities to boost productivity, improve user experience, and discover valuable business insights. Nearly three-fourths of companies, according to Accenture, have made AI their top digital investment priority for 2024.  

Workers are also exploring what generative AI can do for them. More than half of all US workers are using AI chatbots for tasks such as drafting reports, brainstorming ideas, and doing background research, according to The Conference Board.

But when it comes to converting those experiments into programs that deliver business results at scale, companies have a lot of work to do. According to a recent AWS survey of chief data officers, only 19% of companies report having advanced their early GAI efforts into experiments with use cases, and a mere 6% have put them into production.  

In other words, while the adoption rate is sky-high, the execution rate of GAI initiatives lags significantly behind. What can companies do to put their gen AI projects on a faster path to ROI? According to the experts we interviewed, it starts with identifying, and clearing, the most common roadblocks.

Undefined use cases

One key stumbling block is identifying the best, specific use cases for GAI. So far, the most common workplace application is helping to automate tasks in customer service. According to the AWS survey, in which MIT researchers polled more than 300 top data executives, roughly 45% of companies are using GAI tools to enhance customer support, primarily through the use of chatbots.

Between 35% and 40% of these executives also say their organizations use these tools to boost personal productivity or to accelerate software development (in the form of code assistants), and another 32% use them for sales and marketing (for personalized campaigns and offerings).

While just 11% of companies say they are piloting organization-wide GAI projects (and 16% report that they have banned use of generative AI by employees), many use cases are happening at the individual level.


Those low figures shouldn’t be a surprise. Many companies haven’t taken the necessary steps to prepare their data for AI; uploading their data to publicly available models like ChatGPT can be risky; and few companies have the data science resources in-house to build their own GAI models.  

As with nearly every popular new technology that hits the workplace, adoption starts at the individual level. Just as employees began using smartphones at work long before employers drafted mobile-device policies, so goes GAI. Workers are leading the way by turning to publicly available tools like Google Bard to automate manual tasks and improve their productivity.


But even those novel efforts have their limits. For example, Namanh Hoang, a Los Angeles-based branding and marketing consultant, took on a project to create characters on packaging for a top graphics card manufacturer. He turned to two publicly available AI generators, Midjourney and Stable Diffusion, to create conceptual mockups. But because it's challenging to prompt these tools to consistently re-create the same artwork, he doesn’t use them to generate final images.

Instead, he hands off the concepts to one of his designers to finalize before delivering them to the client. That sort of use case—relying on AI to speed up the drafting process—is one that companies of all sizes should consider for gen AI applications.

"It's highly unlikely we can depend on generative AI platforms to create usable final artwork," says Hoang. "Gen AI is powerful when appropriately guided but isn't ready to be trusted to have complete autonomy over business content and communications." 

Poor data quality 

In an ideal scenario, companies would use proprietary data to fine-tune a large language model (LLM) such as Llama, OpenLM, or Mistral to perform tasks specific to a company’s needs, whether it’s a bank assessing loan applications or a global manufacturer predicting supply chain disruptions.

A company could also build and train a proprietary LLM on its own domain-specific data, but that process is expensive, can take years, and requires in-house data scientists. Partnering with a large LLM vendor to do the same is another option. In either case, most organizations aren't anywhere close to doing that because their own data isn't ready for prime AI time.
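For readers curious what fine-tuning on proprietary data looks like in practice, below is a minimal, illustrative sketch using the open-source Hugging Face transformers, datasets, and peft libraries. The model name, data file, and hyperparameters are placeholders rather than recommendations:

```python
# Minimal, illustrative sketch of LoRA fine-tuning with Hugging Face
# transformers + peft. Model name, data file, and hyperparameters are
# placeholders; a real project would add evaluation and privacy review.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

base = "mistralai/Mistral-7B-v0.1"  # any open base model would do
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # this tokenizer ships without a pad token
model = AutoModelForCausalLM.from_pretrained(base)

# Freeze the base weights and train only small LoRA adapter matrices,
# which keeps hardware and cost requirements manageable.
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM"))

# Proprietary, pre-cleaned text, one example per line (hypothetical file).
data = load_dataset("text", data_files="internal_corpus.txt")["train"]
data = data.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("finetuned")  # saves only the small adapter weights
```

Because the LoRA approach trains only a small set of adapter weights on top of a frozen base model, it can run on modest hardware and keeps proprietary data inside the company's own infrastructure.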

That’s why a comprehensive data strategy is an essential prerequisite for getting value from gen AI, according to 9 out of 10 of the data executives surveyed by MIT. However, fewer than half of those executives say they have done the spadework required to prepare their data for ingestion into LLMs. Basic steps, sketched in code after the list, include:

  • Breaking down data silos 

  • Integrating or gathering data to train the AI

  • Ensuring data meets basic quality standards
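To make those three steps concrete, here is a purely hypothetical sketch in Python using pandas; every file and column name below is invented for illustration:

```python
# Hypothetical illustration of the three steps above. Every file and
# column name here is invented; real pipelines are far more involved.
import pandas as pd

# Steps 1-2: break down silos by joining two departmental exports
crm = pd.read_csv("crm_export.csv")          # e.g., customer_id, segment
tickets = pd.read_csv("tickets_export.csv")  # e.g., customer_id, ticket_text
merged = tickets.merge(crm, on="customer_id", how="left")

# Step 3: basic quality standards: drop duplicates and incomplete rows
clean = (
    merged
    .drop_duplicates(subset=["customer_id", "ticket_text"])
    .dropna(subset=["ticket_text", "segment"])
)

print(f"{len(merged) - len(clean)} rows removed; {len(clean)} kept for training")
clean.to_csv("training_corpus.csv", index=False)
```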

Peggy Tsai, chief data officer at BigID, a data governance and lifecycle management platform, says many companies looking to start gen AI projects have just begun to realize they lack the core data to deploy them. 

"They need cross-functional datasets, but their data is still siloed," she says. "So they'll be spending the first part of 2024 building out their data foundations and figuring out what those datasets will look like. Right now, most enterprises are missing the data quality, privacy, security, and governance needed to do trustworthy AI." 


Some companies may not even know what's inside the unstructured data they're feeding into LLMs, adds Tsai. The data may contain Social Security numbers or other sensitive personal information that could violate regulations such as the European Union’s General Data Protection Regulation (GDPR).

"You need to really know what's inside your data before you make the decision to include it in your algorithms and models," she adds. "Standard practices for transparency and accountability with AI are mostly still a work in progress. There's a lot we haven't figured out yet."

Risk aversion

Organizations that are lagging behind with GAI need to shift their mindsets, says Tsai, and embrace experimentation.

"The leaders who approach gen AI with a mindset of experimentation and innovation are the ones who see the most success," she says.

Joel Wolfe, the president of HiredSupport, a global customer support company, was an early but skeptical adopter of generative AI tools. "Then one of our operations managers came to me and said, 'If we don't figure out how to incorporate these tools into our workflows, they may replace us altogether,'" he says. 

Today, HiredSupport’s 100+ employees use Freshworks’ Freshchat product and other helpdesk platforms to provide around-the-clock support for companies (including Shopify and Amazon stores) all over the world. Its agents also use the tools to tailor email responses, improve the quality of live chat sessions, and find information faster.

As a result, the technology has become an indispensable tool for the company, which has administrative offices in Southern California but the bulk of its workforce in Pakistan.

However, there are limits on how far down the generative AI road HiredSupport is willing to travel. 

"We go back and forth on how much to use AI in our day-to-day operations," says Wolfe. "If you use it too much, you lose that human touch that our clients and their customers look for." 
