Why Every Company Needs a Generative AI Center of Excellence

Generative AI solutions enrich user prompts with proprietary data, so grass-roots adoption of generative AI poses significant security, privacy, and compliance risks. To manage these risks, companies must create Generative AI Centers of Excellence.

For the first time in human history, AI can be provided and consumed as a service, creating many opportunities for using AI in the enterprise.

Specifically, companies can now use AI without the significant upfront investment previously required to clean data, engineer features, and train, deploy, and monitor models.

Unfortunately, this also presents a set of new challenges, especially in the areas of security, compliance, and control over proprietary data.

To see why, consider the high-level architecture of a typical generative AI assistant.

There are three parties involved in this solution:

  • You – the company using the assistant.
  • Vendor A – the vendor providing the AI assistant.
  • Vendor B – the vendor providing the large foundation model.

Vendor A hosts a semantic index for the proprietary data owned by the company. The index is used to enrich user requests with relevant data before they are forwarded to the large foundation model. Without proprietary data, the model’s responses would be useless: too generic and outdated.
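This enrichment flow can be sketched in a few lines. The example below is a minimal illustration, not any vendor's actual API: the "semantic index" is a toy word-overlap ranker standing in for a real embedding index, and all names and documents are hypothetical.

```python
# Minimal sketch of prompt enrichment (retrieval-augmented generation).
# The word-overlap score is a stand-in for real embedding similarity.

def score(query, doc):
    """Toy relevance score: count of shared words between query and document."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d)

def enrich(query, documents, top_k=2):
    """Prepend the top-k most relevant proprietary documents to the prompt."""
    ranked = sorted(documents, key=lambda d: score(query, d), reverse=True)
    context = "\n".join(ranked[:top_k])
    return f"Context:\n{context}\n\nQuestion: {query}"

# Illustrative proprietary documents held in the semantic index.
docs = [
    "Q3 revenue grew 12% driven by the enterprise segment.",
    "The cafeteria menu changes on Mondays.",
    "Enterprise churn fell to 3% after the support overhaul.",
]

# The enriched prompt is what actually gets forwarded to the foundation model.
prompt = enrich("How did the enterprise segment perform?", docs)
```

Without the retrieved context, the model would have to answer from its generic, frozen training data; with it, the answer can reflect the company's own numbers.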

Consider now a company where generative AI assistants are adopted bottom-up in a grass-roots fashion.

As a result of this process, the company’s proprietary data is replicated across multiple semantic indices maintained by different vendors. Each index contains a copy of the data and presents a risk of a data leak.

In addition, the company has no control over which foundation model providers the assistants use. Some providers may be tempted to use the company's proprietary data to improve their models, creating yet another risk of a data leak.

Fortunately, we are familiar with this type of challenge and know how to handle it.

In the past, enterprises had similar concerns about cloud services, APIs, and mobile apps. Some of us remember carrying two mobile phones: a Blackberry issued by the company and an iPhone for personal use.

In all cases, the problem was solved by applying the platform/application pattern:

  1. Access to third-party services is performed via a centrally managed platform that enforces security and regulatory compliance.
  2. Applications use the APIs provided by the platform.
  3. The platform provides low-code tools that enable citizen developers to create new solutions.

The same approach can be used to address the security and compliance challenges presented by generative AI solutions.

In particular, the company can deploy a semantic broker that can support any number of generative AI assistants and agents used by the various organizational units.
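A broker of this kind might look like the sketch below. This is an illustrative design, not a specific product: the `SemanticBroker` class, the provider names, and the redaction rule are all hypothetical. It allow-lists model providers vetted centrally, redacts sensitive tokens before they leave the company, and keeps an audit trail.

```python
import re

class SemanticBroker:
    """Central gateway every assistant calls instead of reaching vendors directly."""

    def __init__(self, approved_providers):
        # Only providers vetted by the Center of Excellence may be called.
        self.approved = set(approved_providers)
        self.audit_log = []

    def redact(self, text):
        """Strip obvious sensitive tokens (here, SSN-shaped numbers) from prompts."""
        return re.sub(r"\b\d{3}-\d{2}-\d{4}\b", "[REDACTED]", text)

    def forward(self, provider, prompt, call_model):
        """Route a request to an approved provider, with redaction and auditing."""
        if provider not in self.approved:
            raise PermissionError(f"Provider {provider!r} is not approved")
        safe_prompt = self.redact(prompt)
        self.audit_log.append((provider, safe_prompt))
        return call_model(provider, safe_prompt)

broker = SemanticBroker(approved_providers={"vendor-b"})
reply = broker.forward(
    "vendor-b",
    "Summarize the claim for customer 123-45-6789.",
    call_model=lambda provider, prompt: f"[{provider}] ok",  # stand-in model call
)
```

Because every assistant goes through the same choke point, the Center of Excellence can change the approved-provider list, the redaction rules, or the audit policy in one place.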

The semantic broker can be centrally managed by a Generative AI Center of Excellence, which can also be responsible for establishing best practices, processes, solution patterns, and personnel training: all vital components of the business AI Quotient (AQ).
