March 14, 2024

VMware Private AI and DKubeX: Empowering Secure and Efficient Large Language Models for Enterprise

Introducing VMware Private AI

VMware Private AI is all about enabling companies to quickly harness the benefits of generative AI projects while ensuring the privacy and control of sensitive data, regardless of whether it's stored in a data center, public cloud, or at the edge. VMware simplifies the adoption of Private AI through two offerings: VMware Private AI Foundation with NVIDIA, and the Private AI Reference Architecture for a more open ecosystem approach.

Both VMware Private AI Reference Architecture and VMware Private AI Foundation with NVIDIA run on top of VMware Cloud Foundation (VCF) with Tanzu. This platform is crucial because it simplifies how you deploy and manage AI workloads using Kubernetes clusters and containerized apps. With VCF with Tanzu, you can create a unified infrastructure that supports both traditional and modern apps, including complex AI and machine learning workloads.

Figure 1. VMware Private AI Reference Architecture - Overview

What's DKubeX All About?

DKubeX is an LLMOps platform that is tailor-made for training Large Language Models securely with your data, whether you’re working on-premises or in multi-cloud environments. It enables enterprises to fine-tune open source LLMs like Llama2, Mistral, and MPT-7B with large proprietary datasets. It covers the whole lifecycle of LLMs, from getting your data ready to deploying Retrieval-Augmented Generation (RAG)-based chatbots.

DKubeX is built on top of standard private AI cloud foundations: VMware VMs, Kubernetes distributions such as Tanzu, Ray, and MLflow. It supports NVIDIA GPUs such as the A10, A16, A30, A100, L4, L40, and more.

With proper planning of the compute infrastructure, DKubeX can be installed and operational within an hour, and your first conversational AI application can be ready for testing within days, depending on the size and complexity of your corpus. In most cases, a conversational AI application with answers approaching 80% or higher accuracy can be built within 10-12 weeks.

One of the standout features of DKubeX is its focus on security, privacy, and traceability. The SecureLLM component in DKubeX ensures that all interactions between your data and the open source LLM models, or any public cloud AI models, are access-controlled and logged during the data ingestion, RAG, and LLM fine-tuning phases. All interactions between the end conversational AI applications and the serving LLM models are logged as well, including human feedback responses, across the dev, test, and production stages. This enables traceability for quality checks and for improving the accuracy of answers drawn from your corpus.
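SecureLLM's internals are proprietary, but the access-control-plus-audit-log pattern described above can be sketched in a few lines of Python. The class and field names below are purely illustrative assumptions, not DKubeX's actual API:

```python
import time

# Illustrative sketch of an access-control + audit-log pattern like the one
# SecureLLM provides; class and field names are hypothetical, not DKubeX's API.
class InteractionLogger:
    def __init__(self, allowed_users):
        self.allowed_users = set(allowed_users)
        self.audit_log = []  # in production this would be durable storage

    def query(self, user, phase, prompt, llm):
        # Access control: reject users not granted access to the model.
        if user not in self.allowed_users:
            self.audit_log.append({"user": user, "phase": phase, "allowed": False})
            raise PermissionError(f"{user} is not authorized")
        response = llm(prompt)
        # Log the full interaction for later traceability / quality checks.
        self.audit_log.append({
            "user": user, "phase": phase, "allowed": True,
            "prompt": prompt, "response": response, "ts": time.time(),
        })
        return response

    def record_feedback(self, user, rating):
        # Human feedback is logged alongside interactions (dev/test/prod).
        self.audit_log.append({"user": user, "feedback": rating})

echo_llm = lambda p: f"answer to: {p}"  # stand-in for a served LLM
log = InteractionLogger(allowed_users=["alice"])
print(log.query("alice", "rag", "What is an NDA?", echo_llm))
print(len(log.audit_log))
```

Every call, allowed or denied, leaves an audit entry, which is what makes later quality checks on answers possible.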

Figure 2. DKubeX Architecture Overview

Bringing VMware Private AI Architecture and DKubeX Together

When you combine VMware Private AI Architecture and DKubeX, you get a great out-of-the-box solution for deploying Large Language Models securely and efficiently. Running DKubeX on VCF with Tanzu means you can tap into a scalable and secure infrastructure that's up to the task of handling LLM fine-tuning.

Here's what this dynamic duo can do for you:

● Securely Fine-Tune LLMs: DKubeX's focus on security, paired with VMware's robust infrastructure, means you can fine-tune LLMs by leveraging RAG or prompt techniques with sensitive data without sweating over privacy or compliance issues.

● Scale Efficiently: Integrating DKubeX with VCF with Tanzu lets you scale resources smoothly to meet the demands of LLM training and inference.

● Simplify Management: This platform simplifies the management of the LLM workflow, from data ingestion and model fine-tuning to deployment, reducing operational complexity and overhead.

● Boost Collaboration: Teams can work together more effectively on LLM projects, sharing data, models, and insights securely within the VMware Private AI and DKubeX environment.

Figure 3. VMware Private AI Reference Architecture with DKubeX

Solution in Action

To see VMware Private AI Architecture and DKubeX in action, let's walk through the basic steps of deploying DKubeX and accessing a chatbot application.


Ensure you have a ready and accessible Tanzu Workload Cluster on VCF with vGPUs configured, along with Helm and kubectl installed and set up.

Figure 4. VMware VCF SDDC Manager View

Figure 5. Tanzu Kubernetes Cluster Namespace View

Deployment Steps

  1. Add the DKubeX Helm Repository and generate the “values.yaml” file. This file contains the parameters required to deploy DKubeX.

  2. Edit the “values.yaml” file.

  3. Install DKubeX using Helm.

  4. Install and configure the SecureLLM application.
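As a rough illustration of step 2, a Helm values file for a platform like this typically captures cluster, storage, GPU, and authentication settings. The keys below are hypothetical placeholders, not DKubeX's actual schema; always start from the "values.yaml" file generated in step 1:

```yaml
# Hypothetical values.yaml fragment -- key names are illustrative only;
# the file generated in step 1 is the authoritative template.
cluster:
  provider: tanzu            # Kubernetes flavor the chart is deployed on
  storageClass: vsan-default # persistent storage for models and datasets
gpu:
  enabled: true
  type: a100                 # NVIDIA GPU class available to workloads
auth:
  provider: github           # OAuth provider used for DKubeX sign-in
```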

For more information, please consult the DKubeX Deployment Guide.

Accessing DKubeX and Running a RAG Example

  1. Open a browser and go to https://x.x.x.x/ (the IP address assigned to DKubeX).

  2. Sign in to DKubeX. In this case, GitHub was configured to provide authentication for the application. Please consult DKubeX’s documentation for more information.

    Figure 6. GitHub Authentication Screen

  3. Open the Terminal Application.

    Figure 7. DKubeX UI

  4. Run the required commands. Please consult the RAG Example in the Quickstart - DKubeX User Guide.

    Figure 8. DKubeX Terminal

  5. Deploy a RAG-based chatbot application with base LLM summarization. From the DKubeX UI, open the SecureLLM application, click the Admin Login button, and log in using the admin credentials created during the SecureLLM deployment.

    Figure 9. DKubeX UI - SecureLLM
  6. Create a new key for your application. In the API key name field, provide a unique name for the key to be created, then click Generate Key.

    Figure 10. SecureLLM View
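Under the hood, minting an application key like this typically amounts to generating a random token and storing a record of it. The sketch below shows that general pattern in Python; it is an assumption about the usual approach, not SecureLLM's actual implementation:

```python
import hashlib
import secrets

# Illustrative key-generation pattern -- not SecureLLM's actual code.
def generate_api_key(name, keystore):
    if name in keystore:
        raise ValueError(f"key name '{name}' already exists")
    token = secrets.token_urlsafe(32)  # the secret shown to the user once
    # Store only a hash so a leaked keystore does not leak usable keys.
    keystore[name] = hashlib.sha256(token.encode()).hexdigest()
    return token

keystore = {}
key = generate_api_key("chatbot-demo", keystore)
print(len(key) >= 32, "chatbot-demo" in keystore)
```

This is also why key names must be unique: the name is the lookup handle for the stored key record.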


  7. Create a chatbot application via the Terminal. Run the required commands provided on the DKubeX examples page.

  8. Once the app's deployment status becomes Running, you can access the application from the Apps page of the DKubeX UI.

    Figure 11. DKubeX UI - Chatbot Application

  9. Ask a question, e.g., "What is the difference between a unilateral and a mutual NDA?"

    Figure 12. Chatbot Application
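The chatbot behind this exchange follows the standard RAG pattern: retrieve the passages from your corpus most relevant to the question, then have the LLM answer from them. A minimal, dependency-free Python sketch of that flow, where keyword-overlap retrieval stands in for a real vector index and a template stands in for the served LLM:

```python
# Minimal RAG sketch: keyword-overlap retrieval stands in for a vector
# index, and a template stands in for the serving LLM.
CORPUS = [
    "A unilateral NDA binds only one party to confidentiality.",
    "A mutual NDA binds both parties to keep shared information confidential.",
    "An MSA defines the general terms governing future agreements.",
]

def retrieve(question, corpus, k=2):
    q_words = set(question.lower().split())
    # Rank passages by how many question words they share.
    scored = sorted(corpus,
                    key=lambda p: len(q_words & set(p.lower().split())),
                    reverse=True)
    return scored[:k]

def answer(question, corpus):
    context = " ".join(retrieve(question, corpus))
    # A real deployment would send context + question to the LLM here.
    return f"Based on the corpus: {context}"

print(answer("What is the difference between a unilateral and mutual NDA?", CORPUS))
```

A production setup replaces the overlap score with embedding similarity and the template with a model call, but the retrieve-then-generate shape is the same.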

Wrapping Up

VMware Private AI and DKubeX offer an attractive solution for companies looking to get the most out of Large Language Models while keeping things secure and efficient. By combining VMware's solid infrastructure with DKubeX's specialized LLM platform, you can unlock the full potential of AI to drive innovation and success in our fast-paced digital world.

Useful Resources

VMware Cloud Foundation - Leading Multi-Cloud Platform

VMware Cloud Foundation with VMware Tanzu

DKube X - An AI Framework to Leverage LLMs For Enterprise | DKube

DKubeX Overview - DKubeX User Guide

DKubeX User Guide - Examples
