The returns from AI projects are ‘dismal’, business leaders complain

Companies have become more cautious about investing in artificial intelligence tools due to concerns about cost, data security and safety, according to a study conducted by Lucidworks, a provider of e-commerce search and customer service applications.

“The honeymoon phase of generative AI is over,” the company said in its Generative AI Global Benchmark Study 2024, published Tuesday. “While leaders remain excited about its potential to transform businesses, the initial euphoria has given way to a more measured approach.”

Between April and May 2024, Lucidworks conducted a survey of business leaders involved in AI adoption in North America, EMEA and the APAC region. The respondents, it is claimed, were from 1,000 companies with 100 or more employees across 14 industries, all of which reportedly have active AI initiatives underway.

About 23 percent of respondents were executives and about 50 percent were managers, with 86 percent involved in technology decision-making. Thirty-nine percent of participants were from North America, 36 percent from EMEA and 24 percent from the APAC region.

According to the survey results, 63 percent of global companies plan to increase spending on AI over the next 12 months, compared to 93 percent in 2023 when Lucidworks conducted its first survey.

Broken down by region, the highest percentage of those planning to spend more was among US respondents, at 69 percent. But in the APAC region, less than half (49 percent) of Chinese business leaders expect to increase AI spending, down from last year when all respondents said they would spend more.

Of all organizations, 36 percent planned to keep AI spending the same through 2024, compared to just 6 percent last year.

The delay can be attributed to several factors.

One problem is that AI hasn’t yet paid off for those trying to make it work. “Unfortunately, the financial benefits of implemented projects are dismal,” the study says. “Forty-two percent of companies have not yet seen a significant benefit from their generative AI initiatives.”

And to realize meaningful benefits, it’s necessary to move beyond the pilot testing phase, something few companies have been able to do. Only 25 percent of planned investments in generative AI have been completed so far, the study said.

Another reason is growing concern about the costs of AI projects, cited fourteen times more often than last year. Concerns about the accuracy of AI systems’ responses also multiplied, up fivefold.

The costs involved in strategically deploying generative AI can be significant

In terms of cost, about 49 percent of organizations have chosen to use commercial LLMs, such as Google’s Gemini and OpenAI’s ChatGPT. Another 30 percent use both commercial and open-source LLMs, while only 21 percent have bet exclusively on open-source LLMs such as Llama 3 and Mistral. The Lucidworks research predicts that the balance will shift toward open-source LLMs based on expected performance improvements from open-source models and cost considerations.

“The costs associated with strategically deploying generative AI can absolutely be significant, regardless of whether you host your own large language models or use commercial APIs,” Eric Redman, senior director of product for data science at Lucidworks, told The Register in an email.

“These initial costs are similar in the grand scheme of things, but are really just the tip of the iceberg.”

According to Redman, organizations need to weigh security and accuracy, aligning responses with policy, the cost of acquiring data, and keeping ongoing costs under control.

“The bottom line is that ensuring AI security, accurate AI responses and responsible data acquisition all come at a price,” Redman said. “Cutting corners in these areas can lead to inaccurate or inappropriate responses, ultimately undermining the value and effectiveness of your AI implementation.”

Among the organizations surveyed, the top generative AI initiatives focused on governance (standardizing models to ensure alignment, limiting access to generative AI tools based on role, and so on) and on cutting general and administrative costs (QA testing, debugging, code suggestions, and HR help documentation).

The research shows that qualitative applications, which take text as input and produce a constrained set of responses, have been the most successful generative AI initiatives, accounting for around a quarter of successful implementations. Concretely, these are projects such as generating FAQs and providing HR support, and they are usually the easiest to implement.

Applications with a quantitative component – using generative AI to monitor, predict, analyze, optimize, or prioritize, among other more challenging tasks – have had a harder time, with less than 15 percent successfully implemented. According to the research, these include optimizing search results, screening applicants, supporting the financial close process, and so on.

Redman said the popularity of code generation as one of the most important applications makes a lot of sense. “It’s essentially a great example of how AI copilots can empower knowledge workers,” he explained.

“These copilots have proven to be valuable in a variety of creative tasks, whether writing code, drafting documents or more. The beauty lies in their collaborative nature: they offer suggestions and support, but the final decision and responsibility for the output lies firmly with the human user. This is in stark contrast to, say, a chatbot that communicates with customers, where the AI may have a greater degree of autonomy.”

And Redman said it’s also not surprising that AI governance has been a key common initiative.

“Given the powerful generative AI capabilities, organizations naturally prioritize risk management,” he said.

“Understanding and mitigating the risks associated with any AI application is paramount, especially given the growing regulatory landscape such as the EU AI Act. These regulations emphasize transparency and user control, allowing individuals to understand how their data is used and have choices in their interactions with AI systems.” ®
