Generative AI for Enterprises – Learning From the Trenches

Neha Gupta at Uniphore takes a look at Generative AI for enterprises and the lessons learned from the trenches.

As generative AI moves from the hype phase towards maturity, more enterprises are actively integrating Large Language Models (LLMs) and multimodal models into their products and realizing the challenges involved in the process.

This transformative wave, spearheaded by GenAI/LLMs, represents a paradigm shift, empowering machines to process, comprehend and generate information in unprecedented ways, especially in the realms of text, images and videos. Some even argue that this revolution could surpass the impact of the Internet itself.

LLMs boast remarkable generalizability, handling tasks across diverse domains right out of the box, commonly known as zero-shot inferencing.

Their adaptability can be further refined through fine-tuning. They can also be steered with natural-language instructions and examples, a practice known as prompt engineering.
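To make these terms concrete, here is a small illustration of the same task phrased as a zero-shot instruction and as a few-shot prompt with worked examples. The prompts are our own illustrative examples, not tied to any specific model or API.

```python
# Illustrative prompts only; no specific model or API is assumed.

# Zero-shot: the instruction alone, with no examples.
zero_shot = (
    "Classify the sentiment of this review as positive or negative:\n"
    "'Great battery life.'"
)

# Few-shot (prompt engineering): the same task with worked examples
# that steer the model's output format.
few_shot = (
    "Classify the sentiment of each review.\n"
    "Review: 'Terrible support.' Sentiment: negative\n"
    "Review: 'Love the screen.' Sentiment: positive\n"
    "Review: 'Great battery life.' Sentiment:"
)
```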

However, transitioning from demos to real-world enterprise solutions poses challenges. The industry is grappling with hallucinations (both closed-domain and open-domain), safety concerns such as toxicity and offensive answers, and the substantial effort required to evaluate and benchmark such systems.

There are additional hurdles with large enterprises where technical problems intertwine with organizational and process-oriented issues.

For instance, on AI projects, should experts be concentrated in a single group capable of solving problems across different teams or embedded across groups? Another key consideration is determining which primary problems to tackle first.

Our focus is on developing state-of-the-art AI solutions for enterprises. Throughout this journey, we’ve gleaned insights that apply across industries and enterprise applications, and that serve as valuable guidelines for navigating the intricate landscape of AI integration in diverse business settings.

Here are the three biggest lessons we’ve learned from the frontline trenches of generative AI development:

Enterprises Need One Cohesive Framework for All AI Applications

For products to be effective, a generic framework for the enterprise AI problem needs to be built that can scale both horizontally (across different use cases) and vertically (adding more depth and building blocks while keeping the core intact).

We propose a layered approach that breaks down the problem into parts that can be developed independently yet keeps them cohesive. They are:

Knowledge Layer

In most enterprise use cases, we want the AI models to draw reference and context from enterprise documents, not from the open internet. To do so, we need to give the core model knowledge of, and connectors to, the enterprise’s documents and conversations.

The knowledge layer consists of document ingestors backed by a datastore (potentially a vector DB) and connectors for the enterprise’s different media and data files.
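As a minimal sketch of this layer, assuming the sentence-transformers package and an in-memory stand-in for a production vector DB (the model name and chunk size are illustrative choices, not our actual stack):

```python
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative embedding model

def chunk(text: str, size: int = 500) -> list[str]:
    """Split a document into fixed-size character chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]

class VectorStore:
    """In-memory stand-in for a production vector DB."""

    def __init__(self) -> None:
        self.chunks: list[str] = []
        self.vectors: list[np.ndarray] = []

    def ingest(self, document: str) -> None:
        """Chunk a document, embed the chunks and store both."""
        pieces = chunk(document)
        self.chunks.extend(pieces)
        self.vectors.extend(encoder.encode(pieces, normalize_embeddings=True))

    def search(self, query: str, k: int = 3) -> list[str]:
        """Return the k chunks most similar to the query."""
        q = encoder.encode([query], normalize_embeddings=True)[0]
        scores = np.array(self.vectors) @ q  # cosine similarity (vectors are normalized)
        return [self.chunks[i] for i in np.argsort(scores)[::-1][:k]]

store = VectorStore()
store.ingest("Refunds are processed within 5 business days of approval.")
print(store.search("How long do refunds take?"))
```

In production, the in-memory store would be swapped for a managed vector database behind the same ingest/search interface.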

AI Inference Layer

This layer consists of a series of AI services made up of in-house or third-party models that are the core/brain of the system.

While LLMs/large models may form the core, they are not a hammer that can be applied to every problem. Instead, pre- and post-processing layers of several smaller ML models and guardrails need to be applied to create an enterprise-ready solution, along with an orchestrator piece that determines which model is called, and when.
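A minimal sketch of this idea follows; the guardrails and routing rules here are hypothetical placeholders, not Uniphore’s actual stack:

```python
import re
from dataclasses import dataclass
from typing import Callable

@dataclass
class Route:
    matches: Callable[[str], bool]  # predicate deciding if this route applies
    model: Callable[[str], str]     # the model/service this route calls

def redact_pii(text: str) -> str:
    """Pre-processing guardrail: mask obvious e-mail addresses (illustrative)."""
    return re.sub(r"\S+@\S+", "[EMAIL]", text)

def check_toxicity(text: str) -> str:
    """Post-processing guardrail: placeholder for a small toxicity classifier."""
    banned = {"some_banned_term"}
    return "[RESPONSE WITHHELD]" if any(w in text.lower() for w in banned) else text

class Orchestrator:
    """Decides which model handles a request, wrapped in guardrails."""

    def __init__(self, routes: list[Route], fallback: Callable[[str], str]):
        self.routes, self.fallback = routes, fallback

    def run(self, request: str) -> str:
        clean = redact_pii(request)              # pre-processing
        for route in self.routes:                # pick the right model
            if route.matches(clean):
                return check_toxicity(route.model(clean))  # post-processing
        return check_toxicity(self.fallback(clean))

routes = [Route(lambda t: "translate" in t.lower(),
                lambda t: f"[translation of] {t}")]
orc = Orchestrator(routes, fallback=lambda t: f"[LLM answer to] {t}")
print(orc.run("Please translate this for john.doe@example.com"))
```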

Co-pilot Apps/APIs

The apps sit on top of the knowledge and AI inference layers. Examples at this layer may be enterprise-specific (e.g., summarization of conversations, chatbots, supportive question-answering systems, entity/slot detection, language translation).
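As a sketch of what an app at this layer might look like, here is a hypothetical summarization endpoint built with FastAPI; the stubs stand in for the two lower layers, and all names are assumptions:

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()  # run with: uvicorn this_module:app

def retrieve_context(text: str) -> str:
    """Stub standing in for the knowledge layer's retrieval call."""
    return ""

def call_inference(prompt: str, context: str) -> str:
    """Stub standing in for the AI inference layer's orchestrated call."""
    return "summary placeholder"

class SummariseRequest(BaseModel):
    conversation_id: str
    transcript: str

@app.post("/summarise")
def summarise(req: SummariseRequest) -> dict:
    context = retrieve_context(req.transcript)
    summary = call_inference(f"Summarise this call:\n{req.transcript}", context)
    return {"conversation_id": req.conversation_id, "summary": summary}
```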

Having Common Metrics to Measure AI Solutions Is Essential

Along with a common framework, we need a common way to measure AI solutions. We use the following four metrics, in order of priority, to make go/no-go decisions on putting a solution into product:

Accuracy

Measuring accuracy in generative outputs can be challenging. It involves understanding from an ML/NLP perspective how good the model is with respect to the task at hand.

Typical metrics include precision/recall (derived from information segments) and BLEU/ROUGE. For LLMs, there is also an increased need for human-ratings-based feedback on the quality of the solution.
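As a minimal sketch of one metric from this list, here is unigram ROUGE-1 precision/recall/F1 computed by hand; a production evaluation would normally use a maintained package (e.g. the rouge-score library) plus human ratings:

```python
from collections import Counter

def rouge1(reference: str, candidate: str) -> dict:
    """Unigram overlap between a human reference and a model output."""
    ref = Counter(reference.lower().split())
    cand = Counter(candidate.lower().split())
    overlap = sum((ref & cand).values())  # unigrams shared by both texts
    precision = overlap / max(sum(cand.values()), 1)
    recall = overlap / max(sum(ref.values()), 1)
    f1 = 2 * precision * recall / max(precision + recall, 1e-9)
    return {"precision": precision, "recall": recall, "f1": f1}

print(rouge1("the agent resolved the billing issue",
             "agent resolved billing issue quickly"))
```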

Latency

When AI output is required in real time, the latency of the application becomes critical and is the second most important metric to optimize for. Several libraries are available for optimizing large-model inference, including CTranslate2, vLLM, TensorRT-LLM and more.
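A simple sketch of how such a latency profile might be measured before a go/no-go decision, with a stub standing in for the real inference call:

```python
import statistics
import time

def call_model(prompt: str) -> str:
    """Stub standing in for a real inference call (e.g. served via vLLM)."""
    time.sleep(0.05)
    return "response"

def latency_profile(prompt: str, runs: int = 50) -> dict:
    """Time repeated calls and report median and tail latency."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        call_model(prompt)
        samples.append(time.perf_counter() - start)
    cuts = statistics.quantiles(samples, n=20)  # 5%, 10%, ..., 95% cut points
    return {"p50_ms": 1000 * statistics.median(samples),
            "p95_ms": 1000 * cuts[18]}

print(latency_profile("Summarise this call transcript ..."))
```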

Concurrency

Depending on the workload, enterprises need to know how many concurrent servers (GPUs) are needed to provide support to the product.

In the example of call summarization for call centre assistants, if the expectation is for the call centre to receive summaries for 600 concurrent agents, we will need to be able to match these concurrency needs together with the latency requirements.
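A back-of-the-envelope sizing sketch for this example follows; the latency budget and per-GPU throughput are illustrative assumptions, not measured figures:

```python
import math

concurrent_agents = 600       # from the example above
latency_budget_s = 10         # assumed acceptable wait for a summary
summaries_per_gpu_per_s = 2   # assumed throughput of one optimized GPU server

# Worst case: all agents' calls end in the same window, so every summary
# must be produced within the latency budget.
peak_requests_per_s = concurrent_agents / latency_budget_s
gpus_needed = math.ceil(peak_requests_per_s / summaries_per_gpu_per_s)
print(f"Peak load {peak_requests_per_s:.0f} req/s -> {gpus_needed} GPUs")
```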

Cost

Because LLM inference has high computational demands, it is critical to consider the availability and cost of GPUs before proposing a generative AI solution in a product.

To manage expectations, enterprises need accurate usage calculations with reasonable margins before the deployment of a product.
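A minimal sketch of such a usage calculation, with illustrative prices, volumes and margin:

```python
def monthly_cost(calls_per_day: int, tokens_per_call: int,
                 price_per_1k_tokens: float, margin: float = 0.25) -> float:
    """Estimate monthly inference cost with a safety margin on top."""
    tokens_per_month = calls_per_day * 30 * tokens_per_call
    base = tokens_per_month / 1000 * price_per_1k_tokens
    return base * (1 + margin)  # reasonable margin over the point estimate

# e.g. 10,000 calls/day, ~1,500 tokens each, $0.002 per 1K tokens, 25% margin
print(f"${monthly_cost(10_000, 1_500, 0.002):,.2f} per month")
```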

Dataset Curation Is Critical to Developing New Algorithms and Models

As you mature in the AI lifecycle, from prompt engineering to fine-tuning to training your own model, you need to curate datasets to help benchmark new algorithms and models.

Regardless of the algorithm, having access to datasets the model can learn from is critical. For the algorithm to perform best on your datasets and use cases, you need to either fine-tune or pre-train a model on the datasets that are closest to the production use case.

Key dataset curation considerations include:

Internally Available Data

Consistency and accuracy have been known issues, especially with open-source LLMs. Fine-tuning and training an internal model (if resources are available) can improve the performance of the algorithms.

For example, by instruction fine-tuning LLMs for our own use cases, we have been able to develop small, fine-tuned models that perform better than models 10 times their size.
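As a sketch of the data-preparation step behind such instruction fine-tuning (the prompt template and JSONL schema are illustrative conventions, not our actual format):

```python
import json

TEMPLATE = ("### Instruction:\nSummarise the following call transcript.\n\n"
            "### Input:\n{transcript}\n\n### Response:\n")

def to_instruction_example(transcript: str, summary: str) -> dict:
    """Pair an instruction-formatted prompt with its target response."""
    return {"prompt": TEMPLATE.format(transcript=transcript),
            "response": summary}

pairs = [("Agent: Hello ... Customer: My bill is wrong ...",
          "Customer called about a billing error; agent issued a credit.")]

with open("finetune_data.jsonl", "w") as f:
    for transcript, summary in pairs:
        f.write(json.dumps(to_instruction_example(transcript, summary)) + "\n")
```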

Third-Party Vendor Data

When customer data is scarce, the best way to get data to train/tune the model is through third-party vendors.

Many offer domain-specific, off-the-shelf and tailor-made datasets for videos, speech, receipts and more. Getting these datasets annotated for the task, whether manually or synthetically, is critical for training and benchmarking.

Open-Source Datasets

There is a plethora of open-source datasets that the research community has made available for building next-gen AI applications; examples include HotpotQA, ProsocialDialog, EmpatheticDialogues and MASSIVE.

Using these datasets to complement real-world domain data can help models learn relevant social dialogue traits, as well as general domain entities/intents that occur in specific industries (e.g., travel and hospitality, banking).
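As a minimal sketch, one of these datasets can be pulled via the Hugging Face datasets library; the dataset identifier and config here are assumptions, as names on the Hub may differ from those used in prose:

```python
from datasets import load_dataset

# Load a small slice of HotpotQA for inspection/benchmarking.
hotpot = load_dataset("hotpot_qa", "distractor", split="train[:100]")
print(hotpot[0]["question"])  # inspect a sample multi-hop question
```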

In summary, navigating the enterprise AI revolution involves adopting three crucial strategies:

  1. Implementing a unified framework for all AI applications,
  2. Standardizing metrics for benchmarking, and
  3. Establishing a robust system for data curation and training.

Effectively executing these also requires the building (or buying) of an ML infrastructure and optimization framework to ensure high-speed performance. Stay tuned for more detailed insights on this key component in our upcoming blog posts.

This blog post has been re-published by kind permission of Uniphore – View the Original Article

For more information about Uniphore - visit the Uniphore Website

About Uniphore

Uniphore has built the most comprehensive and powerful platform that combines conversational AI and automation, computer vision, emotion and tonal analysis, workflow automation, and RPA in a single integrated platform.

Read other posts by Uniphore

Call Centre Helper is not responsible for the content of these guest blog posts. The opinions expressed in this article are those of the author, and do not necessarily reflect those of Call Centre Helper.

Author: Uniphore

Published On: 15th Apr 2024 - Last modified: 16th Apr 2024
Read more about - Industry Insights
